Setting
I have a Terraform blueprint that runs some curl commands in a remote-exec provisioner.
The curl commands need a username and password.
So, in Terraform, I had variables without default values that asked the user for their username and password. The curl command would then use those entered values at runtime. This way I don't store usernames and passwords on GitHub.
But I think this may be insecure. Wouldn't the username and password now be stored in the Terraform state file?
Question
How should I use terraform to execute a curl command that requires sensitive information?
If you're doing something in a provisioner, it isn't stored in the state file, because Terraform makes no attempt to track what happens there. You should probably treat your state files as sensitive anyway, since some resources do store sensitive information in state (such as database passwords), and think about how to encrypt them and manage access to them.
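For illustration, here's a minimal sketch of that pattern. The host, URL, and variable names are placeholders, not from the question; the variables have no defaults, so Terraform prompts for them at runtime, and sensitive = true (Terraform 0.14+) keeps the values out of plan/apply output, while the provisioner arguments themselves are not tracked in state:

    variable "admin_user" {
      type = string
    }

    variable "admin_password" {
      type      = string
      sensitive = true # hides the value from CLI output (Terraform 0.14+)
    }

    resource "null_resource" "configure" {
      # Connection details are placeholders for whatever host you provision
      connection {
        type = "ssh"
        host = "203.0.113.10"
        user = "ec2-user"
      }

      provisioner "remote-exec" {
        inline = [
          "curl -u '${var.admin_user}:${var.admin_password}' https://example.com/api/setup",
        ]
      }
    }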
Related
I'm using GitLab Enterprise Edition 14.6.5-ee
I want to create a Git tag automatically when I merge a branch back to master. I'm fine with the actual Git commands; the problem is the authentication: the build bot doesn't know how to authenticate back to the server. There's an answer here about how to set up SSH keys, but that requires me to use my personal credentials, which is just wrong, because it's not me creating the tag; it's the build bot.
Seriously, it just doesn't make sense to say that the bot doesn't know how to authenticate. I mean, it just pulled the freakin' code from the repo! So why is it such a big leap from being able to pull code to being able to push code?
Any ideas how to automate the creation of tags without using my personal credentials?
CI jobs do have a built-in credential token for accessing the repository: the $CI_JOB_TOKEN variable. However, this token only has read permissions, so it won't be able to create tags. To write to the repository or the API, you'll have to supply a token or SSH key to the job. However, this doesn't necessarily have to be your personal token.
There are a few ways you can authenticate to write to the project without using a personal credential (a minimal job sketch follows this list):
You can use project access tokens
You can use group access tokens -- these are only exposed in the UI after GitLab 14.7
You can use deploy SSH keys (when you grant read-write to the key)
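For illustration, a minimal tag-on-merge job, assuming a project access token stored in a masked CI/CD variable named PROJECT_TOKEN (the variable name and tag scheme are placeholders, not GitLab defaults; CI_SERVER_HOST, CI_PROJECT_PATH, and CI_PIPELINE_IID are predefined variables):

    tag-release:
      stage: deploy
      rules:
        - if: '$CI_COMMIT_BRANCH == "master"'
      script:
        # Point origin at an authenticated URL, then create and push the tag
        - git remote set-url origin "https://oauth2:${PROJECT_TOKEN}@${CI_SERVER_HOST}/${CI_PROJECT_PATH}.git"
        - git tag "v1.0.${CI_PIPELINE_IID}"
        - git push origin "v1.0.${CI_PIPELINE_IID}"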
So why is it such a big leap from being able to pull code to being able to push code?
This is probably a good thing. While it may mean extra work in this case, the built-in job authorization tries to apply the principle of least privilege. Many customers have even argued that the existing CI_JOB_TOKEN permissions are too permissive, because they allow read access to other projects!
In any case, it is on GitLab's roadmap to make these permissions more controllable and flexible :-)
Alternatively, use releases
If you don't mind creating a release in addition to a tag, you could also use the release: keyword in the CI YAML as an easy way to create the tag.
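For example, a minimal sketch (the tag naming scheme is a placeholder); the release: keyword needs the release-cli tool, which the image below provides:

    create-release:
      stage: deploy
      image: registry.gitlab.com/gitlab-org/release-cli:latest
      rules:
        - if: '$CI_COMMIT_BRANCH == "master"'
      script:
        - echo "Creating release v1.0.$CI_PIPELINE_IID"
      release:
        # Creating the release also creates the tag if it doesn't exist
        tag_name: "v1.0.$CI_PIPELINE_IID"
        description: "Automated release for $CI_COMMIT_SHORT_SHA"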
It's somewhat ironic that the releases API allows you to use the built-in CI_JOB_TOKEN to create releases (and, presumably, tags), but you cannot (as far as I know) use CI_JOB_TOKEN on the tags API to create a tag.
However, in this case, the release/tag will still appear to have been created by you.
I have multiple roles. Each of them has its own vault, encrypted with a unique password. I include the vault in each role with include_vars: vars/encrypted.yml in the playbook tasks. To be able to decrypt the data, I have to add each vault ID to ansible.cfg or pass it with --vault-id.
Ansible asks for the password for EVERY vault ID referenced, even if it will never be used. So if I run a single role, I have to edit either ansible.cfg or the command-line parameters every time to reference only the necessary vault IDs.
How do I dynamically ask for passwords only for the required roles? Maybe I can use an Ansible prompt to ask for the password and somehow declare the vault ID before I use the include_vars module?
P.S.: I cannot store the passwords in files due to security concerns.
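For reference, a minimal sketch of the setup being described (role, file, and vault ID names are placeholders). Each role loads its own encrypted file:

    # roles/myrole/tasks/main.yml
    - name: Load this role's encrypted vars
      include_vars: vars/encrypted.yml

And every vault ID listed on the command line is prompted for up front, whether or not the run ends up using it:

    ansible-playbook site.yml --vault-id role1@prompt --vault-id role2@prompt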
I want to be able to add SSH keys to my GitHub account from a bash script (using curl).
The documentation indicates that the best way to authenticate is with a personal access token.
Following the steps to create a token, I need to specify which scopes it will give me access to.
In keeping with the "principle of least privilege", I would like it to permit only adding new SSH keys (write).
However, I am not sure which category this falls under. I looked at the documentation on scopes, but could not find any reference to "SSH keys".
Which scope(s) should I use?
The admin:public_key section is what I was looking for.
In particular, the write:public_key scope.
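For illustration, a minimal sketch of the curl call against GitHub's "create a public SSH key for the authenticated user" endpoint, assuming the token (with the write:public_key scope) is in a GITHUB_TOKEN environment variable; the title and key values are placeholders:

    # POST a new SSH key to the authenticated user's account
    curl -X POST \
      -H "Authorization: token $GITHUB_TOKEN" \
      -H "Accept: application/vnd.github+json" \
      https://api.github.com/user/keys \
      -d '{"title":"my-laptop","key":"ssh-ed25519 AAAA... user@host"}'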
I'm writing a script to automatically rotate AWS Access Keys on Developer laptops. The script runs in the context of the developer using whichever profile they specify from their ~/.aws/credentials file.
The problem is that if they have two API keys associated with their IAM user account, I cannot create a new key pair until I delete an existing one. However, if I delete the key the script is currently using (which probably comes from the ~/.aws/credentials file, but might come from environment variables or session tokens or something), the script won't be able to create a new key. Is there a way to determine which AWS access key ID is being used to sign boto3 API calls within Python?
My fallback is to parse the ~/.aws/credentials file, but I'd prefer a more robust solution.
Create a default boto3 session and retrieve the credentials:

    import boto3

    # The access key ID boto3 resolved from its normal credential chain
    print(boto3.Session().get_credentials().access_key)
That said, I'm not necessarily a big fan of the approach you are proposing. Both keys might legitimately be in use. I would prefer a strategy that notified users of multiple keys, asked them to validate their usage, and suggested they deactivate or delete keys that are no longer in use.
You can also use IAM's get_access_key_last_used() to retrieve information about when the specified access key was last used.
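For example, a short sketch combining list_access_keys with get_access_key_last_used:

    import boto3

    iam = boto3.client("iam")

    # With no UserName, IAM infers the user from the credentials signing the request
    for key in iam.list_access_keys()["AccessKeyMetadata"]:
        last_used = iam.get_access_key_last_used(AccessKeyId=key["AccessKeyId"])
        info = last_used["AccessKeyLastUsed"]
        # LastUsedDate is absent if the key has never been used
        print(key["AccessKeyId"], key["Status"], info.get("LastUsedDate", "never used"))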
Maybe it would be reasonable to delete keys that are a) inactive and b) haven't been used in N days, but I think that's still a stretch and would require careful handling and awareness among your users.
The real solution here is to move your users to federated access and 100% use of IAM roles. Thus no long-term credentials anywhere. I think this should be the ultimate goal of all AWS users.
I am attempting to integrate a standalone product into an LDAP environment.
I have a RHEL 6.7 system that is configured for LDAP authentication (via SSSD) to which I need to programmatically add local users and groups.
The input XML file has a list of users and groups, with the group membership, login shell, user ID, and group ID that should be used.
Now comes the problem. I have a Perl script that uses the XML file to configure the users and groups: it uses getgrnam and getpwnam to query for users and groups, then makes a system call to groupmod/groupadd and usermod/useradd depending on whether the user or group exists. I found that if LDAP has a group with the same name as the group I am trying to create, my script sees the group as existing and jumps to groupmod instead of groupadd. The group binaries only operate on local groups, so they fail because the group doesn't exist locally. NSS is set up to check files then sss, which explains why getgrnam returns the LDAP group.
Is there a way to have getgrnam and getpwnam only query the local system without having to reconfigure nsswitch.conf and possibly stop/start SSSD when I run the script?
Is there another perl function I can use to query only local users/groups?
Short answer is no: the purpose of those function calls is to make the authentication mechanisms transparent. There's a variety of backends you could be using, and no one wants to hand-roll their own local-files/LDAP/YP/NIS+/arbitrary-PAM authentication mechanism.
If you're specifically interested in the contents of the local passwd and group files, I'd suggest the answer is: read those files directly.
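For example, a minimal Perl sketch that checks group existence against /etc/group directly, bypassing NSS entirely (the group name is a placeholder; error handling is kept minimal):

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Build a name => gid map from the local group file only
    my %local_groups;
    open my $fh, '<', '/etc/group' or die "cannot open /etc/group: $!";
    while (my $line = <$fh>) {
        chomp $line;
        my ($name, undef, $gid) = split /:/, $line;
        $local_groups{$name} = $gid;
    }
    close $fh;

    # A purely local existence check, unaffected by LDAP/SSSD entries
    my $group = 'wheel';
    if (exists $local_groups{$group}) {
        print "local group $group exists with gid $local_groups{$group}\n";
    } else {
        print "local group $group does not exist\n";
    }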