I have multiple roles. Each of them has its own vault encrypted with a unique password. I include the vault in each role by using include_vars: vars/encrypted.yml in the playbook tasks. To be able to decrypt the data I have to put each vault ID into ansible.cfg or pass it with --vault-id.
Ansible asks for the password of EVERY vault ID referenced, even ones that will never actually be used. So if I run a single role, I have to edit either ansible.cfg or the command-line parameters every time to reference only the necessary vault IDs.
How do I dynamically ask for passwords only for the roles that are required? Maybe I can use an Ansible prompt to ask for the password and somehow declare the vault ID before I use the include_vars module?
P. S.: I cannot store passwords in files due to security concerns.
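One workaround is to pass only the vault IDs you actually need on the command line, using the @prompt source so Ansible asks interactively rather than reading a password file. A minimal sketch, assuming vault labels named after the roles (the "web" label and tag here are illustrative):

```shell
# Run only the "web" role's plays and prompt only for its vault password
ansible-playbook site.yml --tags web --vault-id web@prompt
```

As far as I know, vars_prompt cannot feed a password to the vault machinery: vault passwords are resolved by the CLI/config before decryption, not from play variables, so prompting has to happen at the ansible-playbook invocation.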
I want to be able to add ssh keys to my GitHub account from a bash script (using curl).
The documentation points that the best way to authenticate is by using a personal access token.
Following the steps to create a token, I need to specify which scopes it will give me access to.
In respect of the "principle of least privilege", I would like it to permit only to add new ssh keys (write).
However, I am not sure which category this falls under. I looked at the documentation on scopes, but could not find any reference to "ssh keys".
Which scope(s) should I use?
The section admin:public_key is what I was looking for.
In particular the scope write:public_key.
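For reference, a sketch of the curl call itself, assuming a token created with the write:public_key scope ($TOKEN, the key title, and the key file path are placeholders):

```shell
# Add an SSH public key to the authenticated user's GitHub account.
# Requires a personal access token with the write:public_key scope.
curl -H "Authorization: token $TOKEN" \
     -d "{\"title\":\"laptop\",\"key\":\"$(cat ~/.ssh/id_ed25519.pub)\"}" \
     https://api.github.com/user/keys
```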
I'm writing a script to automatically rotate AWS Access Keys on Developer laptops. The script runs in the context of the developer using whichever profile they specify from their ~/.aws/credentials file.
The problem is that if they have two API keys associated with their IAM user account, I cannot create a new key pair until I delete an existing one. However, if I delete the key the script is currently using (which probably comes from the ~/.aws/credentials file, but might come from environment variables or session tokens or something), the script won't be able to create a new key. Is there a way to determine, within Python, which AWS Access Key ID is being used to sign boto3 API calls?
My fallback is to parse the ~/.aws/credentials file, but I'd prefer a more robust solution.
Create a default boto3 session and retrieve the credentials:
print(boto3.Session().get_credentials().access_key)
That said, I'm not necessarily a big fan of the approach that you are proposing. Both keys might legitimately be in use. I would prefer a strategy that notified users of multiple keys, asked them to validate their usage, and suggested they deactivate or delete keys that are no longer in use.
You can also use IAM's get_access_key_last_used() to retrieve information about when the specified access key was last used.
Maybe it would be reasonable to delete keys that are a) inactive and b) haven't been used in N days, but I think that's still a stretch and would require careful handling and awareness among your users.
The real solution here is to move your users to federated access and 100% use of IAM roles. Thus no long-term credentials anywhere. I think this should be the ultimate goal of all AWS users.
Setting
I have a Terraform blueprint that runs some curl commands in a remote-exec provisioner.
The curl commands need a username and password.
So, in Terraform, I declared variables without default values that ask the user for their username and password, and the curl command uses those values at runtime. This way I don't store usernames and passwords on GitHub.
But, I think this may be insecure. Wouldn't the username and password now be stored in the terraform state file?
Question
How should I use terraform to execute a curl command that requires sensitive information?
If you're doing something in a provisioner, it isn't stored in the state file, because Terraform makes no attempt to track what happens there. That said, you should treat your state files as sensitive anyway, since some resources do store sensitive state (such as database passwords), and think about how to encrypt them and manage access to them.
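A minimal sketch of that pattern, with hypothetical names throughout (the variable names, null_resource, and URL are all illustrative; a local-exec provisioner is shown for brevity). Marking the variable sensitive keeps it out of plan/apply output, though anything a resource itself records still lands in state:

```hcl
variable "api_user" {
  type = string
}

variable "api_password" {
  type      = string
  sensitive = true # redacted from plan/apply output (Terraform 0.14+)
}

resource "null_resource" "register" {
  provisioner "local-exec" {
    # Pass secrets through the environment rather than interpolating them
    # into the command string, where they could end up in logs.
    environment = {
      API_USER     = var.api_user
      API_PASSWORD = var.api_password
    }
    command = "curl -u \"$API_USER:$API_PASSWORD\" https://example.com/register"
  }
}
```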
I am attempting to integrate a standalone product into an LDAP environment.
I have a RHEL 6.7 system configured for LDAP authentication (via SSSD) to which I need to programmatically add local users and groups.
The input xml file has a list of users and groups with their group membership, login shell, user id and group id that should be used.
Now comes the problem. I have a Perl script that uses the XML file to configure the users and groups: it calls getgrnam and getpwnam to query for users and groups, then makes a system call to groupmod/groupadd and usermod/useradd depending on whether the entry exists. I found that if LDAP has a group with the same name as the group I am trying to create, my script sees the group as existing and jumps to groupmod instead of groupadd. The group binaries only perform operations on local groups, though, so the call fails because the group doesn't exist locally. NSS is set up to check files then sss, which explains why getgrnam returns the LDAP group.
Is there a way to have getgrnam and getpwnam only query the local system without having to reconfigure nsswitch.conf and possibly stop/start SSSD when I run the script?
Is there another Perl function I can use to query only local users/groups?
The short answer is no - the purpose of those function calls is to make the authentication mechanisms transparent. There's a variety of backends you could be using, and no one wants to hand-roll their own local files/LDAP/YP/NIS+/arbitrary PAM lookup mechanism.
If you're specifically interested in the contents of the local passwd and group files, I'd suggest the answer is: read those files directly.
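A minimal sketch of the direct-read approach, shown in Python for brevity (the same split-on-colon logic ports straight to a Perl sub): by parsing /etc/passwd yourself, NSS - and therefore SSSD - is never consulted.

```python
def local_getpwnam(name, path="/etc/passwd"):
    """Look up a user in the local passwd file only, bypassing NSS/SSSD."""
    with open(path) as fh:
        for line in fh:
            # Fields: name, passwd, uid, gid, gecos, home, shell
            fields = line.rstrip("\n").split(":")
            if fields[0] == name:
                return fields
    return None

print(local_getpwnam("root"))
```

The same pattern applies to /etc/group for the getgrnam side; only the field layout differs.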
I have a sudo account (not root) on several CentOS servers. We would like to share the cluster with other users who do not have accounts, for research purposes. (By share I mean users can reserve a time slot to use the cluster exclusively.) But setting up an OS account for each user is too annoying. Is there a good way to grant them authority to read/write/execute their own files during a certain period of time? I am thinking of something like a temporary username and password that they can use to log in through some interface (like a webserver) I offer, and that expires when their reservation ends. Any ideas?
You can share your unix user account among several users, by using SSH key authentication.
In a nutshell, each user generates a public/private key pair. The allowed public keys are then listed in the following file on the shared unix account:
$HOME/.ssh/authorized_keys
I'm not aware of a mechanism to control when users are allowed to log in. Presumably one could have a cron job that swaps in different versions of the authorized_keys file depending on the time of day. (That seems like over-engineering the solution, though, and users can easily override this kind of restriction.)
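If you did want to experiment with the cron idea anyway, a sketch of what the swap might look like (the account name, file names, and times are all illustrative):

```shell
# /etc/cron.d/shared-account-keys (illustrative)
# Enable the daytime key set at 09:00 and the restricted set at 18:00.
0 9  * * *  shareduser  cp /home/shareduser/.ssh/authorized_keys.day   /home/shareduser/.ssh/authorized_keys
0 18 * * *  shareduser  cp /home/shareduser/.ssh/authorized_keys.night /home/shareduser/.ssh/authorized_keys
```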
Articles:
http://wiki.centos.org/HowTos/Network/SecuringSSH
http://www.ualberta.ca/CNS/RESEARCH/LinuxClusters/pka-putty.html