Currently, I am accessing an AWS Parameter Store value as an environment variable. It is defined in serverless.yml like so:
environment:
  XYZ_CREDS: ${ssm:xyzCreds}
In code, I access it like so: process.env.XYZ_CREDS
I need to move this value to AWS Secrets Manager and access xyzCreds in the same way.
Based on the Serverless documentation, I tried this:
custom:
  xyzsecret: ${ssm:/aws/reference/secretsmanager/XYZ_CREDS_SECRET_MANAGERa~true}
environment:
  XYZ_CREDS: ${self:custom.xyzsecret}}
But it's not working. Please help!
After struggling with this issue myself, I found a solution that worked for me.
Assume that we have a secret XYZ_CREDS where we store user and password key-value pairs. AWS Secrets Manager stores them in JSON format: {"user": "test", "password": "xxxx"}
Here is how to put the user and password into the Lambda function's environment variables:
custom:
  xyzsecret: ${ssm:/aws/reference/secretsmanager/XYZ_CREDS~true}

functions:
  myService:
    handler: index.handler
    environment:
      username: ${self:custom.xyzsecret.user}
      password: ${self:custom.xyzsecret.password}
I'm using Serverless Framework 1.73.1, deploying via CloudFormation.
Hope this helps others.
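For completeness, a minimal sketch of the handler side: once deployed, the resolved secret values are plain environment variables, with nothing Secrets-Manager-specific left at runtime.

// index.js - the resolved secret values arrive as ordinary env vars
exports.handler = async () => {
  const user = process.env.username;
  const password = process.env.password;
  // ... authenticate against the third-party service with user/password ...
};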
Assuming the name of your secret in Secrets Manager is correct, I think you have a stray "a" after MANAGER, just before the ~true decryption flag.
Secrets Manager stores secrets in key-value/JSON format, so specify the variables individually.
E.g.
environment:
  user_name: ${self:custom.xyzsecret.username}
  password: ${self:custom.xyzsecret.password}
Otherwise, pass the Secrets Manager secret name and decrypt it using the aws-sdk in your code:
environment:
  secretkey_name: XYZ_CREDS_SECRET_MANAGER
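For that second approach, here is a minimal sketch using the Node.js aws-sdk (v2), assuming the function's IAM role allows secretsmanager:GetSecretValue on the secret:

const AWS = require('aws-sdk');
const secretsManager = new AWS.SecretsManager();

exports.handler = async () => {
  // Fetch and decrypt the secret named by the env var set above
  const { SecretString } = await secretsManager
    .getSecretValue({ SecretId: process.env.secretkey_name })
    .promise();
  const { user, password } = JSON.parse(SecretString);
  // ... use user/password ...
};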
When using the GUI in the Google Cloud Console to create a secret, all I needed to provide was the secret name and its value, and I was done.
However, I would like to use the gcloud cli to create simple string secrets.
So far, all the documentation keeps mentioning --data-file, like below:
gcloud secrets create sample-secret --data-file="/path/to/file.txt"
How can I use a simple string as the secret value, similar to the GUI flow, so that I can have a command like
gcloud secrets create apiKey "adadadad181718783"
Must it always be a file?
You could try this sample command:
printf "s3cr3t" | gcloud secrets create my-secret --data-file=-
Setting the --data-file flag to "-" reads the secret data from stdin. You can check the documentation for reference.
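If you want to verify the stored value afterwards, you can read it back (assuming your account can access the secret's versions):

gcloud secrets versions access latest --secret=my-secret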
I'm new to deployment/securing keys, and I'm not sure how to securely store google-cloud-auth.json (the auth file required for creating the API client) outside of source code to prevent leaking credentials.
I've currently secured my API keys and tokens in my app.yaml file, specifying them as environment variables, which work as expected and are shown below.
runtime: nodejs10
env_variables:
  SECRET_TOKEN: "example"
  SECRET_TOKEN2: "example2"
However, my google-cloud-auth.json is kept as its own file, since the parameter used for creating the client requires a path string.
const {BigQuery} = require('@google-cloud/bigquery');
...
const file = "./google-cloud-auth.json";
// Creates a BigQuery client
const bigquery = new BigQuery({
  projectId: projectId,
  datasetId: datasetId,
  tableId: tableId,
  keyFilename: file
});
According to Setting Up Authentication for Server to Server Production Applications:
GCP client libraries make use of ADC (Application Default Credentials) to find the credentials meant to be used by the app.
What ADC does is basically check whether the GOOGLE_APPLICATION_CREDENTIALS env variable is set to the path of a service account file.
If the env variable is not set, ADC falls back to the default service account provided by App Engine.
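As a minimal sketch of what that means in code (assuming the same @google-cloud/bigquery library as above): when you rely on ADC, you can construct the client without any key file at all.

const {BigQuery} = require('@google-cloud/bigquery');

// No keyFilename here: credentials are resolved via ADC, i.e. the
// GOOGLE_APPLICATION_CREDENTIALS file if that env variable is set,
// otherwise the App Engine default service account.
const bigquery = new BigQuery();

async function listDatasets() {
  const [datasets] = await bigquery.getDatasets();
  datasets.forEach(d => console.log(d.id));
}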
With this information I can suggest a couple of solutions to provide these credentials safely:
If you require a specific service account, set the path to its key file with the GOOGLE_APPLICATION_CREDENTIALS env variable. This section explains how to do that.
If you are not a fan of moving credential files around, I would suggest using the default service account provided by App Engine.
I just created a new project and deployed a basic app by mixing these 2 guides:
BigQuery Client Libraries
Quickstart for Node.js in the App Engine Standard Environment
My app.yaml had nothing more than the runtime: nodejs10 line, and I was still able to query through the BigQuery client library, using the default service account.
This account comes with the Project Editor role, and you can add any additional roles you need.
TL;DR: Is there a way to set app client custom scopes via cli or sdk?
I'm trying to automate my Cognito deployment with CloudFormation. I've already made some custom resources, since not everything is supported; for these I'm using the AWS JS SDK. I want to set 'Allowed Custom Scopes' for the app clients in a specific user pool. However, I am unable to find how to do this in any documentation AWS provides. The CLI docs (in the Cognito user pools reference) say only this:
AllowedOAuthScopes
A list of allowed OAuth scopes. Currently supported values are "phone", "email", "openid", and "Cognito".
The scopes mentioned there are default scopes that are always available in a user pool. But I also use custom scopes that are provided by a custom resource server I've defined. Those look like resourceServer.com/scope. I can't find any docs about setting those scopes.
So, is there a way to set custom scopes via cli or sdk?
Custom scopes are supported in the AllowedOAuthScopes field.
Documentation: https://docs.aws.amazon.com/cognito-user-identity-pools/latest/APIReference/API_CreateUserPoolClient.html#CognitoUserPools-CreateUserPoolClient-request-AllowedOAuthScopes
To update userpool client via CLI: https://docs.aws.amazon.com/cli/latest/reference/cognito-idp/update-user-pool-client.html
(check out the --allowed-o-auth-scopes option)
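For illustration, a hedged CLI example (the pool ID, client ID, and scope names are placeholders; note that, as with most update calls, fields you omit may be reset to their defaults):

aws cognito-idp update-user-pool-client \
  --user-pool-id us-east-1_XXXXXXXXX \
  --client-id <app-client-id> \
  --allowed-o-auth-flows client_credentials \
  --allowed-o-auth-flows-user-pool-client \
  --allowed-o-auth-scopes "users/read" "users/write"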
See the example CloudFormation below.
UserPoolResourceServer:
  Type: AWS::Cognito::UserPoolResourceServer
  Properties:
    Identifier: users
    Name: User API
    UserPoolId: !Ref UserPool
    Scopes:
      - ScopeName: "write"
        ScopeDescription: "Write access"
      - ScopeName: "read"
        ScopeDescription: "Read access"

UserPoolClientAdmin:
  Type: "AWS::Cognito::UserPoolClient"
  Properties:
    UserPoolId: !Ref UserPool
    AllowedOAuthFlows:
      - client_credentials
    AllowedOAuthFlowsUserPoolClient: true
    AllowedOAuthScopes:
      - users/read
      - users/write
For anyone coming here looking for a solution: please follow @JohnPauloRodriguez's sample template, but you might need to add a DependsOn attribute to the UserPoolClient resource for it to work.
The reason is that the resource server defining these custom scopes must exist first; only then can the client refer to them. As per the CloudFormation docs:
With the DependsOn attribute you can specify that the creation of a specific resource follows another. When you add a DependsOn attribute to a resource, that resource is created only after the creation of the resource specified in the DependsOn attribute.
So the template for UserPoolClient will become:
CognitoUserPoolClient:
  Type: AWS::Cognito::UserPoolClient
  DependsOn: UserPoolResourceServer
  Properties:
    UserPoolId: !Ref UserPool
    AllowedOAuthFlowsUserPoolClient: true
    AllowedOAuthFlows:
      - code
    AllowedOAuthScopes:
      - users/read
      - users/write
I need to SFTP into an Amazon EC2 instance to send files built from data farmed from a Firestore database. I'm trying to open the connection, but I need access to the EC2 secret key file in my Cloud Functions.
I've done slightly similar things, such as with Stripe and its secret key, so I believe this should be possible. How do I upload my secret key file so I can have access to it in the function below?
return sftp.connect({
  host: 'xxxxxxxxxxxx',
  port: 'xxxx',
  username: 'xxxxxx',
  privateKey: 'filepath'
})
I simply put the secret key in the main directory and read it in with
const privateKey = require('fs').readFileSync('./xxxxxxx.pem', {encoding: 'utf8'});
I may ask another question later to see if this is secure, but I don't see why not.
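Putting the pieces together, a minimal sketch assuming the ssh2-sftp-client package and placeholder connection details:

const fs = require('fs');
const SftpClient = require('ssh2-sftp-client');

const sftp = new SftpClient();
// Pass the key contents (not a file path) as privateKey
const privateKey = fs.readFileSync('./xxxxxxx.pem', 'utf8');

function sendFile(localPath, remotePath) {
  return sftp.connect({
    host: 'xxxxxxxxxxxx', // placeholder EC2 hostname
    port: 22,
    username: 'xxxxxx',   // placeholder user
    privateKey
  })
    .then(() => sftp.put(localPath, remotePath))
    .finally(() => sftp.end());
}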
I'm trying to access AWS Glacier (from the command line on Ubuntu 14.04) using something like:
aws glacier list-vaults -
rather than
aws glacier list-vaults --account-id 123456789
The documentation suggests that this should be possible:
You can specify either the AWS Account ID or optionally a '-', in
which case Amazon Glacier uses the AWS Account ID associated with the
credentials used to sign the request.
Unless "credentials used to sign the request" means that I have to explicitly include credentials in the command, rather than rely on my .aws/credentials file, I would expect this to work. Instead, I get:
aws: error: argument --account-id is required
Does anyone have any idea how to solve this?
The - is supposed to be passed as the value of --account-id, like so:
aws glacier list-vaults --account-id -
--account-id is in fact a required option.
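For example, combined with a named profile from your .aws/credentials file (the profile name is a placeholder), the account ID is taken from those credentials:

aws glacier list-vaults --account-id - --profile myprofile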
The documentation at https://awscli.amazonaws.com/v2/documentation/api/latest/reference/glacier/list-vaults.html says that "--account-id" is a required parameter for the glacier section of the full AWS API. A little weird, but documented. So yay.