Bash with AWS CLI - unable to locate credentials - linux

I have a shell script which is supposed to download some files from S3 and mount an ebs drive. However, I always end up with "Unable to locate credentials".
I have specified my credentials with the aws configure command, and the commands work outside the shell script. Could somebody please tell me (preferably in detail) how to make it work?
This is my script:
#!/bin/bash
AWS_CONFIG_FILE="~/.aws/config"
echo $1
sudo mkfs -t ext4 $1
sudo mkdir /s3-backup-test
sudo chmod -R ugo+rw /s3-backup-test
sudo mount $1 /s3-backup-test
sudo aws s3 sync s3://backup-test-s3 /s3-backup/test
du -h /s3-backup-test
Thanks for any help!

sudo will change the $HOME directory (and therefore ~) to /root, and remove most bash variables like AWS_CONFIG_FILE from the environment. Make sure you do everything with aws as root or as your user; don't mix.
Make sure you ran sudo aws configure, for example. And try:
sudo bash -c 'AWS_CONFIG_FILE=/root/.aws/config aws s3 sync s3://backup-test-s3 /s3-backup/test'
You might prefer to remove all the sudo from inside the script, and just sudo the script itself.
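For example, a minimal sketch of the script with the internal sudo removed, invoked as a whole via sudo instead (this assumes the /s3-backup/test path in the original was a typo for /s3-backup-test):
#!/bin/bash
# invoke as: sudo ./backup.sh /dev/xvdf
# aws runs as root here, so configure credentials as root first (sudo aws configure)
set -e
mkfs -t ext4 "$1"
mkdir -p /s3-backup-test
chmod -R ugo+rw /s3-backup-test
mount "$1" /s3-backup-test
aws s3 sync s3://backup-test-s3 /s3-backup-test
du -h /s3-backup-test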

While you might have your credentials and config file properly located in ~/.aws, it might not be getting picked up by your user account.
Run this command to see if your credentials have been set: aws configure list
To set the credentials, run this command: aws configure and then enter the credentials that are specified in your ~/.aws/credentials file.

The unable to locate credentials error usually occurs when working with different aws profiles and the current terminal can't identify the credentials for the current profile.
Notice that you don't need to fill all the credentials via aws configure each time - you just need to reference to the relevant profile that was configured once.
From the Named profiles section in AWS docs:
The AWS CLI supports using any of multiple named profiles that are
stored in the config and credentials files. You can configure
additional profiles by using aws configure with the --profile option,
or by adding entries to the config and credentials files.
The following example shows a credentials file with two profiles. The
first [default] is used when you run a CLI command with no profile.
The second is used when you run a CLI command with the --profile user1
parameter.
~/.aws/credentials (Linux & Mac) or %USERPROFILE%\.aws\credentials (Windows):
[default]
aws_access_key_id=AKIAIOSFODNN7EXAMPLE
aws_secret_access_key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
[user1]
aws_access_key_id=AKIAI44QH8DHBEXAMPLE
aws_secret_access_key=je7MtGbClwBF/2Zp9Utk/h3yCo8nvbEXAMPLEKEY
So, after setting up the specific named profile (user1 in the example above) via aws configure or directly in the ~/.aws/credentials file, you can select the specific profile:
aws ec2 describe-instances --profile user1
Or export it for the whole terminal session:
$ export AWS_PROFILE=user1
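The named profile can also be created non-interactively with aws configure set (using the example values above; the region is just a placeholder):
aws configure set aws_access_key_id AKIAI44QH8DHBEXAMPLE --profile user1
aws configure set aws_secret_access_key je7MtGbClwBF/2Zp9Utk/h3yCo8nvbEXAMPLEKEY --profile user1
aws configure set region us-east-1 --profile user1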

Answering in case someone stumbles across this based on the question's title.
I had the same problem whereby the AWS CLI was reporting unable to locate credentials.
I had removed the [default] set of credentials from my credentials file, as I wasn't using them and didn't think they were needed. It seems that they are.
I then reformed my file as follows and it worked...
[default]
aws_access_key_id=****
aws_secret_access_key=****
region=eu-west-2
[deployment-profile]
aws_access_key_id=****
aws_secret_access_key=****
region=eu-west-2

This isn't necessarily related to the original question, but I came across this when googling a related issue, so I'm going to write it up in case it may help anyone else. I set up aws on a specific user, and tested using sudo -H -u thatuser aws ..., but it didn't work with awscli 1.2.9 installed on Ubuntu 14.04:
% sudo -H -u thatuser aws configure list
      Name                    Value             Type    Location
      ----                    -----             ----    --------
   profile                <not set>             None    None
access_key                <not set>             None    None
secret_key                <not set>             None    None
    region                us-east-1      config_file    ~/.aws/config
I had to upgrade it using pip install awscli, which brought in newer versions of awscli (1.11.93), boto, and a myriad of other stuff (awscli docutils botocore rsa s3transfer jmespath python-dateutil pyasn1 futures), but it resulted in things starting to work properly:
% sudo -H -u thatuser aws configure list
      Name                    Value             Type    Location
      ----                    -----             ----    --------
   profile                <not set>             None    None
access_key     ****************WXYZ    shared-credentials-file
secret_key     ****************wxyz    shared-credentials-file
    region                us-east-1      config-file    ~/.aws/config

A foolish and cautionary tale from a rusty script slinger:
I had defined the variable HOME in my script as the place where the script should go to build the platform.
This variable overwrote the env var that defines the shell user's $HOME. So the aws command could not find ~/.aws/credentials, because ~ was referencing the wrong place.
I hate to admit it, but I hope it helps save someone some time.
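A minimal repro of the pitfall (the build path is hypothetical):
HOME=/opt/build-platform    # meant as a build location, but it shadows the env var
echo ~/.aws/credentials     # now expands to /opt/build-platform/.aws/credentials
# fix: use a differently named variable, e.g. BUILD_HOME=/opt/build-platform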

Was hitting this error today when running the aws cli on EC2. My situation: I could get credentials info when running aws configure list. However, I am running in a corporate environment where doing things like aws kms decrypt requires a proxy. As soon as I set the proxy, the aws credentials info was gone.
export HTTP_PROXY=aws-proxy-qa.cloud.myCompany.com:8099
export HTTPS_PROXY=aws-proxy-qa.cloud.myCompany.com:8099
Turns out I also had to set NO_PROXY and include the EC2 metadata address, 169.254.169.254, in the list. Also, since you should be going via an S3 endpoint, you should normally have .amazonaws.com in the no_proxy too.
export NO_PROXY=169.254.169.254,.amazonaws.com

If you are using a .aws/config file with roles, ensure your config file is correctly formatted. In my case I had forgotten to put role_arn = in front of the ARN. The default profile sits in the .aws/credentials file and contains the access key id and secret access key of the IAM identity.
The config file contains the role details:
[profile myrole]
role_arn = arn:aws:iam::123456789012:role/My-Role
source_profile = default
mfa_serial = arn:aws:iam::987654321098:mfa/my-iam-identity
region=ap-southeast-2
You can quickly test access by calling
aws sts get-caller-identity --profile myrole
If you have MFA enabled, like I have, you will need to enter the code when prompted.
Enter MFA code for arn:aws:iam::987654321098:mfa/my-iam-identity:
{
    "UserId": "ARABCDEFGHIJKLMNOPQRST:botocore-session-15441234567",
    "Account": "123456789012",
    "Arn": "arn:aws:sts::123456789012:assumed-role/My-Role/botocore-session-15441234567"
}

I ran into this trying to run an aws-cli command from root's cron.
Since credentials are stored in $HOME/.aws/credentials and I had initialized aws-cli through sudo, $HOME was still /home/user/. When running from cron, $HOME is /root/, and thus cron cannot find the file.
The fix was to change $HOME for the specific cron job. Example:
00 12 * * * HOME=/home/user aws s3 sync s3://...
(alternatives include moving, copying, or symlinking the .aws dir from /home/user/ to /root/)
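For instance, the symlink variant (assuming the same paths as above):
sudo ln -s /home/user/.aws /root/.aws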

Try adding sudo to the aws command, like sudo aws ec2 ..., and yes, as meuh mentioned, the awscli then needs to be configured using sudo as well.

If the CLI itself is old, upgrading it can also resolve this:
pip install --upgrade awscli
or
pip3 install --upgrade awscli

Related

python fabric3 executing boto3 functionality on remote ec2 instance

I am trying to execute boto3 functionality from my local machine via python fabric3 scripts.
Configuration on the local machine:
installed python3.5 and fabric3
a script using fabric3 to create an aws rds snapshot
ssh auth stored via ssh-add ~/.ssh/ec2.pem
Configuration on the aws EC2 instance:
created ~/.aws/config and ~/.aws/credentials and stored the required config:
a. region and output in ~/.aws/config
b. aws_access_key_id and aws_secret_access_key in ~/.aws/credentials
rds is open to the ec2 instance only.
Observation:
while executing the fabric script from the local machine, it fails with botocore.exceptions.NoRegionError: You must specify a region.
If I provide the region name via boto3.client(region_name='')
it will then fail with
botocore.exceptions.NoCredentialsError: Unable to locate credentials
meaning python fabric doesn't pick up the ~/.aws/config and ~/.aws/credentials files.
1. Does python fabric pick up the credentials & config from ~/.aws? I don't want to provide the credentials via the fabric script.
2. What is the standard way to achieve fabric-based deployment on aws-ec2?
For the time being, while creating boto3.client() I passed the required aws_access_key_id and aws_secret_access_key, but the problem remains: why does the boto3 client not pick up the ~/.aws/config and ~/.aws/credentials files when triggered via a Python Fabric script?
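A possible workaround sketch for the remote shell that fabric opens (these are standard botocore environment variables; the region value and credentials path below are placeholders):
export AWS_DEFAULT_REGION=us-east-1
export AWS_SHARED_CREDENTIALS_FILE=/home/ubuntu/.aws/credentials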

Run ansible-playbook with a user-data script on an EC2 instance

I am using Packer with Ansible to create an AWS EC2 image (AMI). Ansible is used to install Java 8, install the database (Cassandra), install Ansible, and upload an Ansible playbook (I know that I should push the playbook to git and pull it, but I will do that once this is working). I am installing Ansible and uploading the playbook because I have to change some of the Cassandra properties when an instance is launched from the AMI (add the current instance IP to the Cassandra options, for example). In order to accomplish this I wrote a simple bash script that is added as the user-data-file property. This is the script:
#cloud-boothook
#!/bin/bash
#cloud-config
output: {all: '| tee -a /var/log/cloud-init-output.log'}
ansible-playbook -i "localhost," -c local /usr/local/etc/replace_cassandra.yaml
As you can see I am executing the ansible-playbook in a localhost mode.
The problem is that when I start the instance, I find an error inside the /var/log/cloud-init.log file. The error states that ansible-playbook could not be found. So I added an ls line to the user-data script to check the content of the /usr/bin/ folder (the folder where Ansible is installed), and there was no Ansible in it; but when I access the instance with ssh I can see that Ansible is present inside the /usr/bin/ folder and there is no problem executing ansible-playbook.
Has anyone encountered a similar problem? I think this should be a fairly common use case for Ansible with EC2.
EDIT
After some logging I found out that not only is there no Ansible during the execution of the user data, but the database is missing as well.
Is it possible that some of the code (or all of it) in the Ansible provisioner in Packer is executed when the instance is launched?
EDIT2
I have found out what is happening here. When I add the user data via Packer through the user_data_file property, the user data is executed when Packer launches an instance to build the AMI. The script is launched before the Ansible provisioner is executed, and that is why Ansible is missing.
What I want to do is automatically add user data to the AMI, so that when an instance is launched from the AMI, the user data is executed then, and not when Packer builds the said AMI.
Any ideas on how to do this?
Just run multiple provisioners and don't try to run ansible via cloud-init.
I'm making an assumption here that your playbook and roles are stored locally where you are starting the packer run from. Instead of shoehorning the ansible stuff into user data, run a shell provisioner to install ansible, then run the ansible-local provisioner to run the playbook/role you want.
Below is a simplified example of what I'm talking about. It won't run without some more values in the builder config but I left those out for the sake of brevity.
In the example json, the install-prereqs.sh just adds the ansible ppa apt repo and runs apt-get update -y, then installs ansible.
#!/bin/bash
sudo apt-get install -y software-properties-common
sudo apt-add-repository -y ppa:ansible/ansible
sudo apt-get update
sudo apt-get install -y ansible
The second provisioner will then copy the playbook and roles you specify to the target host and run them.
{
  "builders": [
    {
      "type": "amazon-ebs",
      "ssh_username": "ubuntu",
      "image_name": "some-name",
      "source_image": "some-ami-id",
      "ssh_pty": true
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "script": "scripts/install-prereqs.sh"
    },
    {
      "type": "ansible-local",
      "playbook_file": "path/to/playbook.yml",
      "role_paths": ["path/to/roles"]
    }
  ]
}
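The build is then run as usual (assuming the template above is saved as template.json):
packer build template.json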
This is possible! Please make sure of the following:
An Ansible server (install ansible via CloudFormation userdata if it is not built into the AMI) and your target have SSH access in the security groups you create in CloudFormation.
After you install ansible on the ansible server, your ansible.cfg file points to a private key on the ansible server.
The matching public key for the ansible private key is copied to the authorized_keys file in the root user's .ssh directory on the server(s) you wish to run playbooks on.
You have enabled root ssh access between the ansible server and the target server(s); this can be done by editing the /etc/ssh/sshd_config file and making sure there is nothing preventing SSH access for the root user in the root authorized_keys file on the target server(s).

Git push/pull fails on GitLab in Google Compute Engine

I've installed GitLab on Google Compute Engine using "Click to Deploy" from the project interface. The deployment is successful after a few minutes. I can SSH into the instance, and muck around with it as expected.
I can also log in to GitLab using the web interface, and add SSH keys to my profile. So far, so good. However, when I attempt to push or pull to a new example repository, I receive this message:
Permission denied (publickey,gssapi-keyex,gssapi-with-mic).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
I've removed my local SSH config so it doesn't interfere. Do I need to setup an SSH tunnel of some sort? What am I missing?
UPDATE: Wiping out my local ~/.ssh folder, and regenerating an SSH key (which I've added to my profile in GitLab) produces the following error:
Received disconnect from {GITLAB_IP_ADDRESS}: 2: Too many authentication failures for git
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
UPDATE 2: It seems GitLab may already have a solution: run sudo gitlab-ctl reconfigure. See here: https://gitlab.com/gitlab-org/omnibus-gitlab/blob/master/README.md#git-ssh-access-stops-working-on-selinux-enabled-systems
You need to set up SSH key authentication to communicate with GitLab.
1. Log into your development server as your user, and create a key.
ssh-keygen -t rsa
Follow the steps, and create a passphrase (that you can remember), as you'll need it to pull and push code from/to GitLab.
2. Now that you've created your key, we can copy it:
cat ~/.ssh/id_rsa.pub
Copy the output of that command (including ssh-rsa), and add it to your GitLab profile (http://my-gitlab-server.com/profile/keys/new).
3. Ensure you have the correct privileges on the project(s)
Ensure you have at least the Developer role. (Screengrab of roles: http://i.stack.imgur.com/DSSvl.jpg)
4. Now, copy the project link
Go into your project, and find the SSH link in the top right.
5. Now back to your development server
Navigate to the directory where you'd like to work, and run the following:
$ git init
$ git remote add origin <<project_url>>
$ git fetch
Where <<project_url>> is the link we copied in step 4.
You will be prompted for your password (this is your ssh key passphrase, not your server password) and to add the host to your known_hosts file. After that, the project will start to download and you can enjoy development.
I did these steps on a CentOS 6.4 machine with Digital Ocean. But they shouldn't differ from using Google CE.
Solved it! Thanks to @sxleixer and @Alexander Wenzowski for figuring this out.
Apparently, SELinux was interfering with a non-standard location for the .ssh directory. I needed to run the following commands on the Compute Engine instance:
sudo yum -y install policycoreutils-python # Install the `semanage` tool
sudo semanage fcontext -a -t ssh_home_t "/var/opt/gitlab/.ssh/authorized_keys" # Allow the nonstandard ssh_home_t
See the full thread here:
Google Cloud Engine. Permission denied (publickey,gssapi-keyex,gssapi-with-mic)
In my situation the git user wasn't set up completely. If you get messages in your log files like "User git not allowed because account is locked" (under CentOS or Red Hat it's /var/log/secure), then you simply need to activate the user via passwd -d git.

Bash script to install AWS CLI tools

I am writing a bash script that will automatically install and configure the AWS CLI tools. I am able to install the AWS CLI tools but unable to configure them.
My script is something like this:
#!/bin/bash
wget https://s3.amazonaws.com/aws-cli/awscli-bundle.zip
unzip awscli-bundle.zip
sudo ./awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws
./awscli-bundle/install -b ~/bin/aws
./awscli-bundle/install -h
aws configure
AWS Access Key ID [None]: ABCDEFGHIJKLMNOP ## unable to provide this data
AWS Secret Access Key [None]: xbdwsdADDS/ssfsfa/afzfASADQASAd ## unable to provide this data
Default region name [None]: us-west-2 ## unable to provide this data
Default output format [None]: json ## unable to provide this data
I wish to do the configuration in this script too, so that the credentials are provided by the script and manual entry is not needed. How can this be done?
Use a configuration file rather than the aws configure command. Create a file called ~/.aws/config that looks like this:
[default]
aws_access_key_id=AKIAIOSFODNN7EXAMPLE
aws_secret_access_key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
region=us-west-2
output=json
More info in the docs.
The best practice is to install the awscli utility via bash and then copy the two files into place from your own specified location.
Without running the
aws configure
command these files will not get created; you can copy and paste the files into place using a bash script instead and get the whole setup done:
~/.aws/credentials
~/.aws/config
where credentials contains
[default]
aws_access_key_id=ABCDEFGHIJKLMNOP
aws_secret_access_key=xbdwsdADDS/ssfsfa/afzfASADQASAd
and config file contains
[default]
output=json
region=us-west-2
This will help you to keep the keys in one place, and you can also reuse the same files for any configuration management tool, like Ansible.
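A minimal sketch of writing both files from a script (the key values here are just the placeholders from the question):
mkdir -p ~/.aws
cat > ~/.aws/credentials <<'EOF'
[default]
aws_access_key_id=ABCDEFGHIJKLMNOP
aws_secret_access_key=xbdwsdADDS/ssfsfa/afzfASADQASAd
EOF
cat > ~/.aws/config <<'EOF'
[default]
output=json
region=us-west-2
EOF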
You can additionally configure this from the command line, which will create the configuration files:
aws configure set aws_access_key_id ABCDEFGHIJKLMNOP
aws configure set aws_secret_access_key xbdwsdADDS/ssfsfa/afzfASADQASAd
aws configure set default.region eu-west-1

How to reset Jenkins security settings from the command line?

Is there a way to reset all of the security settings (or just disable them) from the command line without a user/password, as I have managed to completely lock myself out of Jenkins?
The simplest solution is to completely disable security: change true to false in the /var/lib/jenkins/config.xml file.
<useSecurity>true</useSecurity>
A one-liner to achieve the same:
sed -i 's/<useSecurity>true<\/useSecurity>/<useSecurity>false<\/useSecurity>/g' /var/lib/jenkins/config.xml
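Optionally verify the edit before restarting (assuming the default config path):
grep useSecurity /var/lib/jenkins/config.xml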
Then just restart Jenkins:
sudo service jenkins restart
And then go to admin panel and set everything once again.
If you are running Jenkins inside a Kubernetes pod and cannot run the service command, then you can restart Jenkins by deleting the pod:
kubectl delete pod <jenkins-pod-name>
Once the command is issued, Kubernetes will terminate the old pod and start a new one.
One other way would be to manually edit the configuration file for your user (e.g. /var/lib/jenkins/users/username/config.xml) and update the contents of passwordHash:
<passwordHash>#jbcrypt:$2a$10$razd3L1aXndFfBNHO95aj.IVrFydsxkcQCcLmujmFQzll3hcUrY7S</passwordHash>
Once you have done this, just restart Jenkins and log in using this password:
test
The <passwordHash> element in users/<username>/config.xml will accept data of the format
salt:sha256("password{salt}")
So, if your salt is bar and your password is foo then you can produce the SHA256 like this:
echo -n 'foo{bar}' | sha256sum
You should get 7f128793bc057556756f4195fb72cdc5bd8c5a74dee655a6bfb59b4a4c4f4349 as the result. Take the hash and put it with the salt into <passwordHash>:
<passwordHash>bar:7f128793bc057556756f4195fb72cdc5bd8c5a74dee655a6bfb59b4a4c4f4349</passwordHash>
Restart Jenkins, then try logging in with password foo. Then reset your password to something else. (Jenkins uses bcrypt by default, and one round of SHA256 is not a secure way to store passwords. You'll get a bcrypt hash stored when you reset your password.)
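A small helper sketch that produces the whole element for any salt/password pair (same example values as above):
SALT=bar; PASS=foo
HASH=$(echo -n "${PASS}{${SALT}}" | sha256sum | cut -d' ' -f1)
echo "<passwordHash>${SALT}:${HASH}</passwordHash>"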
I found the file in question located in /var/lib/jenkins called config.xml, modifying that fixed the issue.
On El Capitan, config.xml cannot be found at
/var/lib/jenkins/
It is available in
~/.jenkins
Then, as others mentioned, open the config.xml file and make the following changes:
Replace <useSecurity>true</useSecurity> with <useSecurity>false</useSecurity>
Remove <authorizationStrategy> and <securityRealm>
Save it and restart Jenkins (sudo service jenkins restart)
The answer on modifying config.xml was correct. Yet, I think it should be mentioned that /var/lib/jenkins/config.xml looks something like this if you have activated "Project-based Matrix Authorization Strategy". Deleting /var/lib/jenkins/config.xml and restarting Jenkins also does the trick. I also deleted the users in /var/lib/jenkins/users to start from scratch.
<authorizationStrategy class="hudson.security.ProjectMatrixAuthorizationStrategy">
<permission>hudson.model.Computer.Configure:jenkins-admin</permission>
<permission>hudson.model.Computer.Connect:jenkins-admin</permission>
<permission>hudson.model.Computer.Create:jenkins-admin</permission>
<permission>hudson.model.Computer.Delete:jenkins-admin</permission>
<permission>hudson.model.Computer.Disconnect:jenkins-admin</permission>
<!-- if this is missing for your user and it is the only one, bad luck -->
<permission>hudson.model.Hudson.Administer:jenkins-admin</permission>
<permission>hudson.model.Hudson.Read:jenkins-admin</permission>
<permission>hudson.model.Hudson.RunScripts:jenkins-admin</permission>
<permission>hudson.model.Item.Build:jenkins-admin</permission>
<permission>hudson.model.Item.Cancel:jenkins-admin</permission>
<permission>hudson.model.Item.Configure:jenkins-admin</permission>
<permission>hudson.model.Item.Create:jenkins-admin</permission>
<permission>hudson.model.Item.Delete:jenkins-admin</permission>
<permission>hudson.model.Item.Discover:jenkins-admin</permission>
<permission>hudson.model.Item.Read:jenkins-admin</permission>
<permission>hudson.model.Item.Workspace:jenkins-admin</permission>
<permission>hudson.model.View.Configure:jenkins-admin</permission>
<permission>hudson.model.View.Create:jenkins-admin</permission>
<permission>hudson.model.View.Delete:jenkins-admin</permission>
<permission>hudson.model.View.Read:jenkins-admin</permission>
</authorizationStrategy>
We can reset the password while leaving security on.
The config.xml file in /var/lib/jenkins/users/admin/ acts sort of like the /etc/shadow file on Linux or UNIX-like systems, or the SAM file on Windows, in the sense that it stores the hash of the account's password.
If you need to reset the password without logging in, you can edit this file and replace the old hash with a new one generated with bcrypt:
$ pip install bcrypt
$ python
>>> import bcrypt
>>> bcrypt.hashpw(b"yourpassword", bcrypt.gensalt(rounds=10, prefix=b"2a"))  # pass bytes (b"...") on Python 3
'YOUR_HASH'
This will output your hash, with prefix 2a, the correct prefix for Jenkins hashes.
Now, edit the config.xml file:
...
<passwordHash>#jbcrypt:REPLACE_THIS</passwordHash>
...
Once you insert the new hash, reset Jenkins:
(if you are on a system with systemd):
sudo systemctl restart jenkins
You can now log in, and you didn't leave your system open for a second.
To disable Jenkins security in simple steps in Linux, run these commands:
sudo ex +g/useSecurity/d +g/authorizationStrategy/d -scwq /var/lib/jenkins/config.xml
sudo /etc/init.d/jenkins restart
It will remove useSecurity and authorizationStrategy lines from your config.xml root config file and restart your Jenkins.
See also: Disable security at Jenkins website
After gaining access to Jenkins, you can re-enable security on your Configure Global Security page by choosing the Access Control/Security Realm. After that, don't forget to create the admin user.
To reset it without disabling security if you're using matrix permissions (probably easily adaptable to other login methods):
In config.xml, set disableSignup to false.
Restart Jenkins.
Go to the Jenkins web page and sign up with a new user.
In config.xml, duplicate one of the <permission>hudson.model.Hudson.Administer:username</permission> lines and replace username with the new user.
If it's a private server, set disableSignup back to true in config.xml.
Restart Jenkins.
Go to the Jenkins web page and log in as the new user.
Reset the password of the original user.
Log in as the original user.
Optional cleanup:
Delete the new user.
Delete the temporary <permission> line in config.xml.
No securities were harmed during this answer.
On the off chance you accidentally lock yourself out of Jenkins due to a permission mistake, and you don't have server-side access to switch to the jenkins user or root, you can make a job in Jenkins and add this to the Shell Script:
sed -i 's/<useSecurity>true/<useSecurity>false/' ~/config.xml
Then click Build Now and restart Jenkins (or the server if you need to!)
\.jenkins\secrets\initialAdminPassword
Copy the password from the initialAdminPassword file and paste it into the Jenkins.
First, check the location; it depends on whether you installed the WAR, or via a Linux or Windows package. For example, for a WAR install under Linux, and for the admin user:
/home/"User_NAME"/.jenkins/users/admin/config.xml
Go to the <passwordHash> tag, after #jbcrypt::
<passwordHash>#jbcrypt:$2a$10$3DzCGLQr2oYXtcot4o0rB.wYi5kth6e45tcPpRFsuYqzLZfn1pcWK</passwordHash>
Change this password hash using any bcrypt hash generator website, e.g.
https://www.dailycred.com/article/bcrypt-calculator
Make sure it starts with $2a, because that is the prefix Jenkins uses.
In order to remove the default security for Jenkins on Windows OS,
you can go to the config.xml file created inside /users/{UserName}/.jenkins.
Inside this file, change
<useSecurity>true</useSecurity>
to
<useSecurity>false</useSecurity>
step-1: go to the directory cd .jenkins/secrets; there you will find an 'initialAdminPassword' file.
step-2: nano initialAdminPassword
and you will get the password
Changing <useSecurity>true</useSecurity> to <useSecurity>false</useSecurity> will not be enough; you should remove the <authorizationStrategy> and <securityRealm> elements too, and restart your Jenkins server with sudo service jenkins restart.
Remember this: setting <useSecurity> to false on its own may cause problems for you; these instructions are covered in the official documentation here.
Jenkins over Kubernetes and Docker
The case of Jenkins in a container managed by a Kubernetes Pod is a bit more complex, since kubectl exec PODID --namespace=jenkins -it -- /bin/bash will allow you to access the container running Jenkins directly, but you will not have root access; sudo, vi and many commands are not available, and therefore a workaround is needed.
Use kubectl describe pod [...] to find the node running your Pod and the container ID (docker://...)
SSH into the node
run docker exec -ti -u root <container-id> /bin/bash to access the container with root privileges (the container ID is the one found in the first step)
apt-get update
apt-get install -y vim
The second difference is that the Jenkins configuration files are placed in a different path that corresponds to the mount point of the persistent volume, i.e. /var/jenkins_home; this location might change in the future, so check it by running df.
Then disable security: change true to false in the /var/jenkins_home/config.xml file.
<useSecurity>false</useSecurity>
Now it is enough to restart Jenkins, which will cause the container and the Pod to die; it will be created again in a few seconds with the configuration updated (and all the changes, like the vi install, erased) thanks to the persistent volume.
The whole solution has been tested on Google Kubernetes Engine.
UPDATE
Notice that you can as well run ps aux: the password is shown in plain text even without root access.
jenkins@jenkins-87c47bbb8-g87nw:/$ ps aux
[...]
jenkins [..] -jar /usr/share/jenkins/jenkins.war --argumentsRealm.passwd.jenkins=password --argumentsRealm.roles.jenkins=admin
[...]
An easy way out of this is to use the admin password to log in with your admin user:
Change to the root user: sudo su -
Copy the password: xclip -sel clip < /var/lib/jenkins/secrets/initialAdminPassword
Log in with admin and press ctrl + v in the password input box.
Install xclip if you don't have it:
$ sudo apt-get install xclip
Using bcrypt you can solve this issue. Extending @Reem's answer for anyone trying to automate the process using bash and python:
#!/bin/bash
pip install bcrypt
yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
yum -y install xmlstarlet
cat > /tmp/jenkinsHash.py <<EOF
import bcrypt
import sys
if not sys.argv[1]:
    sys.exit(10)
plaintext_pwd = sys.argv[1]
encrypted_pwd = bcrypt.hashpw(sys.argv[1], bcrypt.gensalt(rounds=10, prefix=b"2a"))
isCorrect = bcrypt.checkpw(plaintext_pwd, encrypted_pwd)
if not isCorrect:
    sys.exit(20)
print "{}".format(encrypted_pwd)
EOF
chmod +x /tmp/jenkinsHash.py
cd /var/lib/jenkins/users/admin*
pwd
while (( 1 )); do
echo "Waiting for Jenkins to generate admin user's config file ..."
if [[ -f "./config.xml" ]]; then
break
fi
sleep 10
done
echo "Admin config file created"
admin_password=$(python /tmp/jenkinsHash.py password 2>&1)
# Replacing with the new password
xmlstarlet -q ed --inplace -u "/user/properties/hudson.security.HudsonPrivateSecurityRealm_-Details/passwordHash" -v '#jbcrypt:'"$admin_password" config.xml
# Restart
systemctl restart jenkins
sleep 10
I have kept the password hardcoded here, but it can be user input, depending on the requirement. Also make sure to add that sleep, otherwise any other command involving Jenkins will fail.
To very simply disable both security and the startup wizard, use the Java property:
-Djenkins.install.runSetupWizard=false
The nice thing about this is that you can use it in a Docker image such that your container will always start up immediately with no login screen:
# Dockerfile
FROM jenkins/jenkins:lts
ENV JAVA_OPTS -Djenkins.install.runSetupWizard=false
Note that, as mentioned by others, the Jenkins config.xml is in /var/jenkins_home in the image, but using sed to modify it from the Dockerfile fails, because (presumably) the config.xml doesn't exist until the server starts.
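For instance, building and running such an image (the tag name is arbitrary):
docker build -t jenkins-nosetup .
docker run -p 8080:8080 jenkins-nosetup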
I will add some improvements based on this solution:
https://stackoverflow.com/a/51255443/5322871
In my scenario the deployment was on a Swarm cluster with an NFS volume; in order to perform the password reset I did the following:
Attach to the pod:
$ docker exec -it <pod-name> bash
Generate the hash password with python (do not forget the letter b prefixing the quoted password; the hashpw method requires bytes):
$ pip install bcrypt
$ python
>>> import bcrypt
>>> bcrypt.hashpw(b"yourpassword", bcrypt.gensalt(rounds=10, prefix=b"2a"))
'YOUR_HASH'
Once inside the container find all the config.xml files:
$ find /var/ -type f -iname "config.xml"
Once identified, modify the value of the field (in my case the config.xml was in another location):
$ vim /var/jenkins_home/users/admin_9482805162890262115/config.xml
...
<passwordHash>#jbcrypt:YOUR_HASH</passwordHash>
...
Restart the service:
docker service scale <service-name>=0
docker service scale <service-name>=1
Hope this can be helpful for anybody.
I had a similar issue, and following the reply from ArtB,
I found that my user didn't have the proper configuration. So here is what I did:
Note: manually modifying such XML files is risky. Do it at your own risk. Since I was already locked out, I didn't have much to lose. AFAIK, worst case I would have deleted the ~/.jenkins/config.xml file, as the previous post mentioned.
1. ssh to the jenkins machine
2. cd ~/.jenkins (I guess that some installations put it under /var/lib/jenkins/config.xml, but not in my case)
3. vi config.xml, and under the authorizationStrategy xml tag, add the section below (just use your username instead of "put-your-username")
4. restart jenkins; in my case, as root: service tomcat7 stop; service tomcat7 start
5. try to log in again (worked for me)
Under authorizationStrategy, add:
<permission>hudson.model.Computer.Build:put-your-username</permission>
<permission>hudson.model.Computer.Configure:put-your-username</permission>
<permission>hudson.model.Computer.Connect:put-your-username</permission>
<permission>hudson.model.Computer.Create:put-your-username</permission>
<permission>hudson.model.Computer.Delete:put-your-username</permission>
<permission>hudson.model.Computer.Disconnect:put-your-username</permission>
<permission>hudson.model.Hudson.Administer:put-your-username</permission>
<permission>hudson.model.Hudson.ConfigureUpdateCenter:put-your-username</permission>
<permission>hudson.model.Hudson.Read:put-your-username</permission>
<permission>hudson.model.Hudson.RunScripts:put-your-username</permission>
<permission>hudson.model.Hudson.UploadPlugins:put-your-username</permission>
<permission>hudson.model.Item.Build:put-your-username</permission>
<permission>hudson.model.Item.Cancel:put-your-username</permission>
<permission>hudson.model.Item.Configure:put-your-username</permission>
<permission>hudson.model.Item.Create:put-your-username</permission>
<permission>hudson.model.Item.Delete:put-your-username</permission>
<permission>hudson.model.Item.Discover:put-your-username</permission>
<permission>hudson.model.Item.Read:put-your-username</permission>
<permission>hudson.model.Item.Workspace:put-your-username</permission>
<permission>hudson.model.Run.Delete:put-your-username</permission>
<permission>hudson.model.Run.Update:put-your-username</permission>
<permission>hudson.model.View.Configure:put-your-username</permission>
<permission>hudson.model.View.Create:put-your-username</permission>
<permission>hudson.model.View.Delete:put-your-username</permission>
<permission>hudson.model.View.Read:put-your-username</permission>
<permission>hudson.scm.SCM.Tag:put-your-username</permission>
Now, you can go in different directions. For example, I had github oauth integration, so I could have tried to replace the authorizationStrategy with something like the below.
Note: it worked in my case because I had a specific github oauth plugin that was already configured, so it is more risky than the previous solution.
<authorizationStrategy class="org.jenkinsci.plugins.GithubAuthorizationStrategy" plugin="github-oauth#0.14">
<rootACL>
<organizationNameList class="linked-list">
<string></string>
</organizationNameList>
<adminUserNameList class="linked-list">
<string>put-your-username</string>
<string>username2</string>
<string>username3</string>
<string>username_4_etc_put_username_that_will_become_administrator</string>
</adminUserNameList>
<authenticatedUserReadPermission>true</authenticatedUserReadPermission>
<allowGithubWebHookPermission>false</allowGithubWebHookPermission>
<allowCcTrayPermission>false</allowCcTrayPermission>
<allowAnonymousReadPermission>false</allowAnonymousReadPermission>
</rootACL>
</authorizationStrategy>
Edit the file $JENKINS_HOME/config.xml and change the security configuration with this:
<authorizationStrategy class="hudson.security.AuthorizationStrategy$Unsecured"/>
After that restart Jenkins.
A lot of times you won't have permission to edit the config.xml file.
The simplest thing is to take a backup of config.xml and then delete it using the sudo command.
Restart Jenkins using the command sudo /etc/init.d/jenkins restart.
This will disable all security in Jenkins, and the login option will disappear.
For anyone using macOS, newer versions can be installed via Homebrew, so to restart, this command should be used:
brew services restart jenkins-lts
The directory where the config.xml file is located on Windows:
C:\Windows\System32\config\systemprofile\AppData\Local\Jenkins\.jenkins
