I am trying to connect to MongoDB with a dynamically created username and password using node-vault.
For example, in the Vault docs (https://www.vaultproject.io/docs/secrets/databases/mongodb.html), a dynamic username and password are created to log in with:
$ vault read database/creds/my-role
Key Value
--- -----
lease_id database/creds/my-role/2f6a614c-4aa2-7b19-24b9ad944a8d4de6
lease_duration 1h
lease_renewable true
password 8cab931c-d62e-a73d-60d3-5ee85139cd66
username v-root-e2978cd0-
How can I have this behaviour using node-vault so that I can access MongoDB?
I did this using their Go HTTP client. Since node-vault is also an HTTP client built on Node.js, the procedure should be the same.
First, enable the database secrets engine (if it is not already enabled):
$ vault secrets enable database
API for this: https://www.vaultproject.io/api/system/mounts.html#enable-secrets-engine
Write the MongoDB config:
$ vault write database/config/my-mongodb-database \
plugin_name=mongodb-database-plugin \
allowed_roles="my-role" \
connection_url="mongodb://{{username}}:{{password}}@mongodb.acme.com:27017/admin?ssl=true" \
username="admin" \
password="Password!"
API for this: https://www.vaultproject.io/api/secret/databases/mongodb.html#configure-connection
Configure a role that creates the database credential:
$ vault write database/roles/my-role \
db_name=my-mongodb-database \
creation_statements='{ "db": "admin", "roles": [{ "role": "readWrite" }, {"role": "read", "db": "foo"}] }' \
default_ttl="1h" \
max_ttl="24h"
API for this: https://www.vaultproject.io/api/secret/databases/index.html#create-role
Generate a new credential by reading from the /creds endpoint:
$ vault read database/creds/my-role
API for this: https://www.vaultproject.io/api/secret/databases/index.html#generate-credentials
The node-vault repo has a quite similar example for PostgreSQL: https://github.com/kr1sp1n/node-vault/blob/master/example/mount_postgresql.js
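Translated to node-vault, the four steps above might look like the sketch below. This is untested against a live server; the paths, role name, and MongoDB host are taken from the question, and buildMongoUrl is a hypothetical helper I added to show how the returned credentials feed into a connection string.

```javascript
// Hypothetical helper: build a MongoDB URL from the dynamic credentials
function buildMongoUrl(username, password) {
  return `mongodb://${username}:${password}@mongodb.acme.com:27017/admin?ssl=true`;
}

// Sketch: the four CLI steps above via node-vault's generic mount/write/read methods
async function getMongoUrl(vault) {
  // 1. Enable the database secrets engine (sys/mounts API)
  await vault.mount({ mount_point: 'database', type: 'database' });
  // 2. Write the MongoDB connection config
  await vault.write('database/config/my-mongodb-database', {
    plugin_name: 'mongodb-database-plugin',
    allowed_roles: 'my-role',
    connection_url: 'mongodb://{{username}}:{{password}}@mongodb.acme.com:27017/admin?ssl=true',
    username: 'admin',
    password: 'Password!',
  });
  // 3. Configure the role that creates credentials
  await vault.write('database/roles/my-role', {
    db_name: 'my-mongodb-database',
    creation_statements: '{ "db": "admin", "roles": [{ "role": "readWrite" }] }',
    default_ttl: '1h',
    max_ttl: '24h',
  });
  // 4. Read dynamic credentials from the /creds endpoint
  const creds = await vault.read('database/creds/my-role');
  return buildMongoUrl(creds.data.username, creds.data.password);
}

// Only talk to Vault when an address is actually configured
if (process.env.VAULT_ADDR) {
  const vault = require('node-vault')({
    endpoint: process.env.VAULT_ADDR,
    token: process.env.VAULT_TOKEN,
  });
  getMongoUrl(vault).then(console.log).catch(console.error);
}
```

The resulting URL can then be passed to the MongoDB driver's connect call.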
I'm trying to use the user token (gcloud auth print-access-token) to fetch logs using @google-cloud/logging.
Basic code snippet:
import { GetEntriesResponse, Logging } from "@google-cloud/logging";
export async function readLogsAsync() {
  const logging = new Logging();
  const [entries, _, response]: GetEntriesResponse = await logging.getEntries({
    filter: 'resource.type="k8s_container"',
    resourceNames: ["projects/acme"],
    autoPaginate: false,
    pageSize: 100,
  });
  console.log(entries.length, response.nextPageToken);
}
A similar call over HTTP works with the user token, so I'm assuming there must be some way to pass it to the Node.js API, but I couldn't find an example or documentation.
curl --location --request POST 'https://logging.googleapis.com/v2/entries:list?alt=json' \
--header "Authorization: Bearer $(gcloud auth print-access-token)" \
--header 'Content-Type: application/json' \
--data-raw '{
"filter": "resource.type=\"k8s_container\"",
"orderBy": "timestamp desc",
"pageSize": 100,
"resourceNames": [
"projects/amce"
]
}'
You're not authenticating the code.
Google's libraries support a mechanism called Application Default Credentials (ADC).
ADC makes it easy to authenticate code deployed to Google Cloud Platform as well as code running locally. To run locally, you will need to export GOOGLE_APPLICATION_CREDENTIALS with the path to a Service Account key.
As @guillaume-blaquiere comments, you can get this behavior using gcloud auth application-default, but it is better to use a Service Account.
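Concretely, before running the snippet from the question locally, either of the following makes credentials available to the client library (the key path is a placeholder):

```shell
# Option A: use your own user account via ADC (interactive, run once):
#   gcloud auth application-default login
# Option B: point ADC at a Service Account key (placeholder path):
export GOOGLE_APPLICATION_CREDENTIALS="$HOME/keys/logging-sa.json"
```

With either option in place, `new Logging()` picks up the credentials automatically.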
I'm trying to use Velero to back up an AKS cluster, but for some reason I'm unable to set the backup location in Velero.
I'm getting the error below.
I can confirm the credentials-velero file I have obtains the correct storage access key, and the secret (cloud-credentials) reflects it as well.
Kind of at a lost as to why it's throwing me this error. Never used Velero before.
EDIT:
So I used the following commands to get the credential file:
Obtain the Azure Storage account access key:
AZURE_STORAGE_ACCOUNT_ACCESS_KEY=`az storage account keys list --account-name storsmaxdv --query "[?keyName == 'key1'].value" -o tsv`
Then I create the credential file:
cat << EOF > ./credentials-velero
AZURE_STORAGE_ACCOUNT_ACCESS_KEY=${AZURE_STORAGE_ACCOUNT_ACCESS_KEY}
AZURE_CLOUD_NAME=AzurePublicCloud
EOF
Then my install command is:
./velero install \
--provider azure \
--plugins velero/velero-plugin-for-microsoft-azure:v1.3.0 \
--bucket velero \
--secret-file ./credentials-velero \
--backup-location-config resourceGroup=resourcegroupname,storageAccount=storageAccount,storageAccountKeyEnvVar=AZURE_STORAGE_ACCOUNT_ACCESS_KEY[,subscriptionId=numbersandlettersandstuff] \
--use-volume-snapshots=false
I can verify Velero created a secret called cloud-credentials, and when I base64-decode it I'm able to see what looks like the contents of my credentials-velero file, for example:
AZURE_STORAGE_ACCOUNT_ACCESS_KEY=MYAZURESTORAGEACCOUNTKEY
AZURE_CLOUD_NAME=AzurePublicCloud
Turns out it was the brackets in the install command that were causing the issue:
--backup-location-config resourceGroup=resourcegroupname,storageAccount=storageAccount,storageAccountKeyEnvVar=AZURE_STORAGE_ACCOUNT_ACCESS_KEY[,subscriptionId=numbersandlettersandstuff] \
Removed the brackets, like this:
--backup-location-config resourceGroup=resourcegroupname,storageAccount=storageAccount,storageAccountKeyEnvVar=AZURE_STORAGE_ACCOUNT_ACCESS_KEY,subscriptionId=numbersandlettersandstuff \
and now it works
Not sure how your cred file is formatted or what command you are running.
Please try the file below and update the command as needed.
Example command:
./velero install --provider azure --plugins velero/velero-plugin-for-microsoft-azure:v1.0.1 --bucket velero-cluster-backups --backup-location-config resourceGroup=STORAGE-ACCOUNT-RESOURCEGROUP,storageAccount=STORAGEACCOUNT --use-volume-snapshots=false --secret-file ./credentials-velero
Cred file
AZURE_STORAGE_ACCOUNT_ACCESS_KEY=MYAZURESTORAGEACCOUNTKEY
AZURE_CLOUD_NAME=AzurePublicCloud
I would suggest inspecting the secret that gets created in the K8s cluster and checking the formatting of that secret and its data.
More details here: https://github.com/vmware-tanzu/velero/issues/2272
Check this plugin: https://github.com/vmware-tanzu/velero-plugin-for-microsoft-azure
1: Create a service principal for Velero in Azure AD.
You can then create the credential file in the format below:
AZURE_CLOUD_NAME=AzurePublicCloud
AZURE_SUBSCRIPTION_ID=*************
AZURE_TENANT_ID=**************
AZURE_CLIENT_ID=********
AZURE_CLIENT_SECRET=**********
AZURE_RESOURCE_GROUP=(name of your cluster resource group where your PVCs reside)
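A minimal sketch of generating that file with a heredoc, mirroring the storage-key variant earlier in the thread. All values below are placeholders; the real IDs come from the service principal you create (for example with az ad sp create-for-rbac):

```shell
# Placeholder values; substitute the output of your service-principal creation
AZURE_SUBSCRIPTION_ID="00000000-0000-0000-0000-000000000000"
AZURE_TENANT_ID="00000000-0000-0000-0000-000000000000"
AZURE_CLIENT_ID="00000000-0000-0000-0000-000000000000"
AZURE_CLIENT_SECRET="placeholder-secret"
AZURE_RESOURCE_GROUP="my-cluster-resource-group"

cat << EOF > ./credentials-velero
AZURE_CLOUD_NAME=AzurePublicCloud
AZURE_SUBSCRIPTION_ID=${AZURE_SUBSCRIPTION_ID}
AZURE_TENANT_ID=${AZURE_TENANT_ID}
AZURE_CLIENT_ID=${AZURE_CLIENT_ID}
AZURE_CLIENT_SECRET=${AZURE_CLIENT_SECRET}
AZURE_RESOURCE_GROUP=${AZURE_RESOURCE_GROUP}
EOF

# Sanity check: six KEY=value lines
grep -c '=' ./credentials-velero
```

Pass the resulting file to velero install via --secret-file as in the commands above.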
Using the non-dev Vault server, I went ahead and used "Enable new engine" in the UI for KV version 1 and created a secret.
As a test, I am using a token with root permissions to attempt the following and receive the no route error:
curl -H "X-Vault-Token: " -X GET https://vaultwebsite.com/v1/secret/kvtest1/test12/test123
{"errors":["no handler for route 'secret/kvtest/anothertest/test'"]}
My understanding is that there shouldn't be a no-handler issue, since I enabled that secrets engine through the UI. Am I missing a step or policy, or is this an API path issue?
One of my references was https://www.reddit.com/r/hashicorp/comments/c429fo/simple_vault_workflow_help/, which led me to review the enabled mounts.
My guess is that you've enabled a KV engine and wrote a secret to it, but the path secret/ is wrong.
For example, if I enable an engine and then try to read an existing value, it works:
$ vault secrets enable -version=1 -path kv kv
Success! Enabled the kv secrets engine at: kv/
$ curl --header "X-Vault-Token: $VAULT_TOKEN" $VAULT_ADDR/v1/kv/foo
{"request_id":"2db249af-10de-01c5-4603-8f89a46897b5","lease_id":"","renewable":false,"lease_duration":2764800,"data":{"v6":"1"},"wrap_info":null,"warnings":null,"auth":null}
But if I now try to read from a non-existing path, I get the same error as you, for example:
$ curl --header "X-Vault-Token: $VAULT_TOKEN" $VAULT_ADDR/v1/foobar/foo
{"errors":["no handler for route 'foobar/foo'"]}
It would help to list your existing mounts and verify the path:
$ curl --header "X-Vault-Token: $VAULT_TOKEN" $VAULT_ADDR/v1/sys/mounts
# or
$ vault secrets list
I was getting the same error when I was trying to access a wrong path. For example, I enabled the ssh engine at a specific path, -path=ssh-client-signer:
vault secrets enable -path=ssh-client-signer ssh
So the actual URL for SSH signing should be ssh-client-signer/roles, not ssh/roles:
curl \
--header "X-Vault-Token: ......" \
--request LIST \
http://vault:8200/v1/ssh-client-signer/roles
I can't find a way to list IAM users with the following info:
Username
Key age
Password age
Last login
MFA enabled
Last use
Key active?
I have tried aws iam list-users, but that doesn't tell me much.
Is this possible using the AWS CLI? If so, how?
I will put in an answer, since 4 people have voted, unfairly I think, to close the question.
The short answer is, no, there's no one command you can use to do this, and I can understand why that's confusing and surprising.
Some of this info can be found in the credential report using:
aws iam generate-credential-report
aws iam get-credential-report
See the docs for how to programmatically obtain the credentials report (ref).
From there you can get:
mfa_active
access_key_1_active
access_key_1_last_used_date
access_key_1_last_rotated
password_last_used
password_last_changed
Some other info can be found in the list-access-keys subcommand:
▶ aws iam list-access-keys --user-name alex
{
"AccessKeyMetadata": [
{
"UserName": "alex",
"Status": "Active",
"CreateDate": "XXXX-XX-XXT01:33:31Z",
"AccessKeyId": "XXXXXXXX"
}
]
}
Thus, you can get the "Status" and "CreateDate" from here too using commands like:
aws iam list-access-keys --user-name alex \
--query "AccessKeyMetadata[].CreateDate" \
--output text
More info again can be found in:
▶ aws iam get-login-profile --user-name alex
{
"LoginProfile": {
"UserName": "alex",
"CreateDate": "XXXX-XX-XXT01:33:31Z",
"PasswordResetRequired": false
}
}
You can also get the access key last used date this way:
access_key_id=$(aws iam list-access-keys \
--user-name alex \
--query "AccessKeyMetadata[].AccessKeyId" \
--output text)
aws iam get-access-key-last-used \
--access-key-id $access_key_id
Example output:
{
"UserName": "alex",
"AccessKeyLastUsed": {
"Region": "XXXXXX",
"ServiceName": "iam",
"LastUsedDate": "XXXX-XX-XXT05:28:00Z"
}
}
I think that covers all the fields you asked about. Obviously, you would need to write a bit of code around all this to get it all together.
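As a sketch of gluing it together: the credential report decodes to plain CSV, so once you have it, pulling out the per-user fields is just text processing. The heredoc below stands in for the decoded output of aws iam get-credential-report (the header names are real report columns, but trimmed to the relevant subset; the row values are made up):

```shell
# Stand-in for: aws iam get-credential-report --query Content --output text | base64 --decode
cat > /tmp/credential-report.csv <<'EOF'
user,mfa_active,password_last_used,password_last_changed,access_key_1_active,access_key_1_last_rotated,access_key_1_last_used_date
alex,true,2023-01-02T03:04:05+00:00,2022-11-30T00:00:00+00:00,true,2022-12-01T00:00:00+00:00,2023-01-01T00:00:00+00:00
EOF

# One line per user with the fields from the question
awk -F, 'NR > 1 {
  printf "%s mfa=%s last_login=%s key_active=%s key_rotated=%s key_last_used=%s\n",
         $1, $2, $3, $5, $6, $7
}' /tmp/credential-report.csv
```

On a real report, check the header row first, since the full report has many more columns and the positions shift accordingly.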
I'm using the RESTful API to communicate with the ledger. I've added some protection to the API using Passport.
Now I'd like to issue an identity to a specific participant in the network. The CLI command works just fine:
composer identity issue -n 'epd' -i admin -s adminpw -u "myid" -a "nl.epd.blockchain.Patient#myid"
But whenever I try to use the RESTful API call it keeps saying:
No enrollment ID or enrollment secret has been provided
The payload I am sending looks like the following
{
"participant": "nl.epd.blockchain.Patient#myid",
"userID": "myid",
"options": {
"enrollmentID" : "admin",
"enrollmentSecret" : "adminpw"
}
}
To start up the REST server I use the following command:
composer-rest-server -n epd -p defaultProfile -i admin -s adminpw -N never -P 3000 -S true
So I guess my payload is incorrect because it can't find the enrollment ID and secret. What's the correct format for the payload?
You don't need to put the enrollmentID and enrollmentSecret in the payload. Those get passed in via the composer-rest-server command.
Here are some instructions on enabling REST authentication for a business network: https://hyperledger.github.io/composer/integrating/enabling-rest-authentication.html
I think the step you are missing is "Adding a Blockchain identity to the default wallet".
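Based on that, the payload from the question would reduce to just the participant and userID (assuming the options block can simply be dropped, since the REST server already holds the enrollment credentials):

```json
{
  "participant": "nl.epd.blockchain.Patient#myid",
  "userID": "myid"
}
```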