I need to get the plaintext value for some 70+ secrets in my GitHub organisation.
There is an endpoint to get the list of secret names, which I am currently using:
let secretsData = await octokit.request("GET /orgs/{org}/actions/secrets", {
  org: process.env.org,
});
This gives back a list of Actions secret names; I need the plaintext value for each of these keys.
Use case:
I am migrating from GitHub Actions secrets to AWS Secrets Manager, so I need a clean way to get each key along with its plaintext value.
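Since the endpoint is paginated (30 results per page by default), here is a minimal sketch that collects every name via octokit.paginate, assuming octokit is an already-authenticated Octokit instance:

// Collect all secret names across pages; octokit.paginate follows the
// Link headers, so all 70+ names are returned in one array.
const allSecrets = await octokit.paginate("GET /orgs/{org}/actions/secrets", {
  org: process.env.org,
  per_page: 100,
});
const secretNames = allSecrets.map((secret) => secret.name);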
I have a pipeline in Azure Data Factory that starts by going to a REST API to obtain an authorization token. In order to obtain this token, the initial POST request needs to contain a username, password, and private key in the request body. It looks like this:
{
  "Username": "<myusername>",
  "Password": "<mypassword>",
  "PrivateKey": "<privatekey>"
}
Currently I just have this stored as plain text in the Web activity in ADF.
To me this doesn't seem very secure, and I'm wondering if there is a better way to store this JSON string. I've looked into Azure Key Vault, but that seems to be for storing "data store" credentials. What is the best practice for storing credentials like this to be used by ADF?
You can save the individual values as secrets in Key Vault and fetch them individually via a Web activity (with secure output enabled so the values are masked), thereby keeping your ADF pipeline secure.
The GitHub location below contains the pipeline JSON:
https://github.com/NandanHegde15/Azure-DataFactory-Generic-Pipelines/blob/main/Get%20Secret%20From%20KeyVault/Pipeline/GetSecretFromKeyVault.json
Another way would be to use a SecureString parameter, but I would advise avoiding the parameter and leveraging Key Vault instead.
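For reference, a minimal sketch of such a Web activity (the vault and secret names are placeholders, and it assumes the Data Factory's managed identity has been granted get permission on the vault's secrets):

{
  "name": "GetSecretFromKeyVault",
  "type": "WebActivity",
  "typeProperties": {
    "url": "https://<your-vault>.vault.azure.net/secrets/<secret-name>?api-version=7.0",
    "method": "GET",
    "authentication": {
      "type": "MSI",
      "resource": "https://vault.azure.net"
    }
  },
  "policy": {
    "secureOutput": true
  }
}

With secureOutput enabled, the fetched value is masked in monitoring output; downstream activities can reference it as @activity('GetSecretFromKeyVault').output.value.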
The credentials can be saved in a Key Vault secret.
The secret can then be referenced for authentication in the linked service that connects to the required base URL.
Refer to https://learn.microsoft.com/en-us/azure/data-factory/connector-http?tabs=data-factory#create-a-linked-service-to-an-http-source-using-ui
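As an illustration, a sketch of an HTTP linked service whose password comes from Key Vault (all names are placeholders, and it assumes a Key Vault linked service has already been created in the factory):

{
  "name": "HttpLinkedService",
  "properties": {
    "type": "HttpServer",
    "typeProperties": {
      "url": "<base URL>",
      "authenticationType": "Basic",
      "userName": "<user name>",
      "password": {
        "type": "AzureKeyVaultSecret",
        "store": {
          "referenceName": "<Key Vault linked service name>",
          "type": "LinkedServiceReference"
        },
        "secretName": "<secret name>"
      }
    }
  }
}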
I have a secret personal access token (only for building purposes) in my .npmrc file. As this secret is exposed, I thought of replacing it using Azure Key Vault, but I haven't found any documentation around that. When I created the personal access token, I gave it only packaging/building access. How can I achieve this? Or is there a better way to include the personal access token in the .npmrc file?
Since you confirmed you are using Azure DevOps for your build, you don't need to maintain a PAT in the .npmrc file. Just keep your npm registry URL there (I assume the private npm registry is also in Azure DevOps), like below:
registry={your npm registry URL}
always-auth=false
Now, in the build pipeline, add the npm Authenticate task before npm install:
- task: npmAuthenticate@0
  inputs:
    workingFile: <relative path to your .npmrc file>
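Putting it together, a minimal sketch of the relevant pipeline steps (the .npmrc path is a placeholder):

steps:
  - task: npmAuthenticate@0
    inputs:
      workingFile: .npmrc
  - script: npm install
    displayName: npm install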
Providing secrets to your resource can be done in many ways.
Some resources in Azure allow you to specify environment variables through the Azure CLI. Here's an example with the Azure container instances: link.
On Azure, once you have a Key Vault instance, you can use it to provide secrets to your App Service and Azure Function instances. This is documented here: link, with a focus on Azure Resource Manager templates, which is especially useful for automated deployments.
Although the following is explained in the documentation linked above, the general picture of how to use Key Vault secrets from other Azure resources involves the following:
Create a user-assigned identity or an Azure Active Directory application.
Grant access to this identity (or AAD app) by going to the Access Policies of your Key Vault (this can be done through the portal, of course) and giving it at least read access to your Key Vault (a CLI sketch of these two steps appears below).
After that, create a secret in your Key Vault, go to the secret details, and copy the "Secret Identifier". This will be a URI similar to: https://myvault.vault.azure.net/secrets/mysecret/.
That's the URI you can use to bring Key Vault secrets into other resources.
You'll be able to access this secret from other resources by ensuring the resource has access to the same identity, and by providing the URI through a syntax similar to: @Microsoft.KeyVault(SecretUri=https://myvault.vault.azure.net/secrets/mysecret/).
For example, if you link an Azure Function to the same identity you granted read access to your Key Vault, you can provide a secret through environment variables by setting configuration properties on your resource. In the Azure Portal, locate your resource, go to Configuration, then Application settings, add the name of your environment variable, and as the value use something similar to: @Microsoft.KeyVault(SecretUri=https://myvault.vault.azure.net/secrets/mysecret/). This provides the expected environment variable with the expected secret value to your resource.
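If it helps, a sketch of the identity and access-policy steps using the Azure CLI (all resource names here are placeholders):

# Create a user-assigned identity (hypothetical names).
az identity create --resource-group my-rg --name my-app-identity

# Grant that identity read access to the vault's secrets.
az keyvault set-policy --name myvault \
  --object-id <principal-id-of-the-identity> \
  --secret-permissions get list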
The final approach I can think of is using the @azure/keyvault-secrets client. If using an NPM library to retrieve Key Vault secrets sounds interesting, this is the dependency for you. All the information needed to work with this library should be available on NPM: same link. In any case, a sample using this client would look as follows:
const { DefaultAzureCredential } = require("@azure/identity");
const { SecretClient } = require("@azure/keyvault-secrets");

const credential = new DefaultAzureCredential();
const client = new SecretClient("https://my-key-vault.vault.azure.net", credential);

async function main() {
  const secretName = "MySecretName";
  const latestSecret = await client.getSecret(secretName);
  console.log(`Latest version of the secret ${secretName}:`, latestSecret);
}

main();
You could use this library to load your secrets at any point while your service or program is running.
Please let me know if this information is useful for you. I'm here to help!
I have added a Google Cloud service account to a project and it's working. But the problem is that after an hour (I think), I get this error:
The API returned an error: TypeError: source.hasOwnProperty is not a function
Internal Server Error
and I need to restart the application to make it work.
In this StackOverflow post, I found this:
"Once you get an access token it is treated in the same way - and is expected to expire after 1 hour, at which time a new access token will need to be requested, which for a service account means creating and signing a new assertion."
but it didn't help.
I'm using Node.js and AWS Secrets Manager. Here is the code I have used to authorize:
const jwtClient = new google.auth.JWT(
  client_email,
  null,
  private_key,
  scopes
);

jwtClient.authorize((authErr) => {
  if (authErr) {
    const deferred = q.defer();
    deferred.reject(new Error('Google Drive authentication error!'));
  }
});
Any ideas?
Hint: is there any policy in AWS Secrets Manager for accessing a secret, or in Google Cloud for accessing a service account? For example, access locally vs. online?
[NOTE: You are using a service account to access Google Drive. A service account will have its own Google Drive. Is this your intention or is your goal to share your Google Drive with the service account?]
"Is there any policy in AWS Secrets Manager for accessing a secret, or in Google Cloud for accessing a service account? For example, access locally vs. online?"
I am not sure what you are asking. AWS has IAM policies to control secret management. Since you are able to create a Signed JWT from stored secrets, I will assume that this is not an issue. Google does not have policies regarding accessing service accounts - if you have the service account JSON key material, you can do whatever the service account is authorized to do until the service account is deleted, modified, etc.
Now on to the real issue.
Your Signed JWT has expired and you need to create a new one. You need to track the lifetime of tokens that you create and recreate/refresh the tokens before they expire. The default expiration in Google's world is 3,600 seconds. Since you are creating your own token, there is no "wrapper" code around your token to handle expiration.
The error that you are getting is caused by a code crash. Since you did not include your code, I cannot tell you where. However, the solution is to catch errors so that expiration exceptions can be managed.
Instead of creating the Google Drive client using a Signed JWT, I recommend creating the client with a service account. Token expiration and refresh will be managed for you.
Very few Google services still support Signed JWTs (which your code is using). You should switch to using service accounts, which start off with a Signed JWT and then exchange that for an OAuth 2.0 Access Token internally.
There are several libraries that you can use. Either of the following will provide the features that you should be using instead of crafting your own Signed JWTs.
https://github.com/googleapis/google-auth-library-nodejs
https://github.com/googleapis/google-api-nodejs-client
The following code is an example and has not been tested or debugged. Change the scopes in this example to match what you require. Remove the section where I load a service-account.json file and replace it with your AWS Secrets Manager code. Fill out the code with your required functionality. If you have a problem, create a new question with the code that you wrote and detailed error messages.
const {GoogleAuth} = require('google-auth-library');
const {google} = require('googleapis');

// Replace this with your AWS Secrets Manager lookup.
const key = require('./service-account.json');

/**
 * Instead of specifying the type of client you'd like to use (JWT, OAuth2, etc.),
 * this library will automatically choose the right client based on the environment.
 */
async function main() {
  const auth = new GoogleAuth({
    credentials: {
      client_email: key.client_email,
      private_key: key.private_key,
    },
    scopes: 'https://www.googleapis.com/auth/drive.metadata.readonly',
  });

  const drive = google.drive('v3');

  // List Drive files.
  drive.files.list({ auth: auth }, (listErr, resp) => {
    if (listErr) {
      console.log(listErr);
      return;
    }
    resp.data.files.forEach((file) => {
      console.log(`${file.name} (${file.mimeType})`);
    });
  });
}

main();
I have an HTTP-triggered function running on Google Cloud Functions, which uses require('googleapis').sheets('v4') to write data into a Google Sheets spreadsheet.
For local development I added an account via the Service Accounts section of their developer console. I downloaded the token file (dev-key.json below) and used it to authenticate my requests to the Sheets API as follows:
const google = require('googleapis');
const sheets = google.sheets('v4');

var API_ACCT = require("./dev-key.json");

let apiClient = new google.auth.JWT(
  API_ACCT.client_email, null, API_ACCT.private_key,
  ['https://www.googleapis.com/auth/spreadsheets']
);

exports.myFunc = function (req, res) {
  var newRows = extract_rows_from_my_client_app_request(req);
  sheets.spreadsheets.values.append({
    auth: apiClient,
    // ...
    resource: { values: newRows }
  }, function (e) {
    if (e) res.status(500).json({ err: "Sheets API is unhappy" });
    else res.status(201).json({ ok: true });
  });
};
After I shared my spreadsheet with my service account's "email address" (e.g. local-devserver@foobar-bazbuzz-123456.iam.gserviceaccount.com), it worked!
However, as I go to deploy this to the Google Cloud Functions service, I'm wondering if there's a better way to handle credentials? Can my code authenticate itself automatically without needing to bundle a JWT key file with the deployment?
I noticed that there is a FUNCTION_IDENTITY=foobar-bazbuzz-123456@appspot.gserviceaccount.com environment variable set when my function runs, but I do not know how to use this in the auth value of my googleapis call. The code for google.auth.getApplicationDefault does not use it.
Is it considered okay practice to upload a private JWT token along with my GCF code? Or should I somehow be using the metadata server for that? Or is there a built-in way that Cloud Functions already can authenticate themselves to other Google APIs?
It's common to bundle credentials with a function deployment. Just don't check them into your source control. Cloud Functions for Firebase samples do this where needed. For example, creating a signed URL from Cloud Storage requires admin credentials, and this sample illustrates saving that credential to a file to be deployed with the functions.
"I'm wondering if there's a better way to handle credentials? Can my code authenticate itself automatically without needing to bundle a JWT key file with the deployment?"
Yes. You can use Application Default Credentials instead of how you've done it; note that you don't use the function getApplicationDefault(), as it has been deprecated since this question was posted.
The link above shows how to make a simple call using the google.auth.getClient API, providing the desired scope, and have it decide the credential type needed automatically. On Cloud Functions this will be a 'Compute' object, as defined in the google-auth-library.
These docs say it well here:
"After you set up a service account, ADC can implicitly find your credentials without any need to change your code, as described in the section above."
Where ADC is Application Default Credentials.
Note that, for Cloud Functions, you use the App Engine service account: YOUR_PROJECT_ID@appspot.gserviceaccount.com, as documented here. That is the one you found via the FUNCTION_IDENTITY env var; this rather tripped me up.
The final step is to make sure that the service account has the required access as you did with your spreadsheet.
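For illustration, a minimal sketch of the ADC approach with google.auth.getClient (the Sheets scope and range are assumptions carried over from the question; substitute whatever you need):

const { google } = require('googleapis');

async function appendRows(spreadsheetId, newRows) {
  // On Cloud Functions, ADC resolves to the runtime service account
  // automatically; no key file needs to be bundled with the deployment.
  const auth = await google.auth.getClient({
    scopes: ['https://www.googleapis.com/auth/spreadsheets'],
  });
  const sheets = google.sheets({ version: 'v4', auth });
  // 'Sheet1' is a placeholder range.
  return sheets.spreadsheets.values.append({
    spreadsheetId,
    range: 'Sheet1',
    valueInputOption: 'RAW',
    requestBody: { values: newRows },
  });
}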
I'm building a monitoring tool based on AWS Lambda. Given a set of metrics, the Lambdas should be able to send SMS using the Twilio API. To be able to use the API, Twilio provides an account SID and an auth token.
How and where should I store these secrets?
I'm currently thinking to use AWS KMS but there might be other better solutions.
Here is what I've come up with. I'm using AWS KMS to encrypt my secrets into a file that I upload with the code to AWS Lambda. I then decrypt them when I need to use them.
Here are the steps to follow.
First, create a KMS key. You can find documentation here: http://docs.aws.amazon.com/kms/latest/developerguide/create-keys.html
Then encrypt your secret and put the result into a file. This can be achieved from the CLI with:
aws kms encrypt --key-id some_key_id --plaintext "This is the secret you want to encrypt" --query CiphertextBlob --output text | base64 -D > ./encrypted-secret
(Note: base64 -D is the macOS flag; on Linux use base64 -d.)
You then need to upload this file as part of the Lambda. You can decrypt and use the secret in the Lambda as follows.
var fs = require('fs');
var AWS = require('aws-sdk');

var kms = new AWS.KMS({ region: 'eu-west-1' });

var secretPath = './encrypted-secret';
var encryptedSecret = fs.readFileSync(secretPath);

var params = {
  CiphertextBlob: encryptedSecret
};

kms.decrypt(params, function (err, data) {
  if (err) console.log(err, err.stack);
  else {
    var decryptedSecret = data['Plaintext'].toString();
    console.log(decryptedSecret);
  }
});
I hope you'll find this useful.
As of AWS Lambda support for NodeJS 4.3, the correct answer is to use Environment Variables to store sensitive information. This feature integrates with AWS KMS, so you can use your own master keys to encrypt the secrets if the default is not enough.
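A minimal sketch of that approach (TWILIO_ACCOUNT_SID and TWILIO_AUTH_TOKEN are hypothetical variable names set in the function's configuration, optionally encrypted with your own KMS key):

// Lambda handler reading Twilio credentials from environment variables.
// The variable names are hypothetical; configure them on the function.
exports.handler = async (event) => {
  const accountSid = process.env.TWILIO_ACCOUNT_SID;
  const authToken = process.env.TWILIO_AUTH_TOKEN;
  // ...use accountSid and authToken to call the Twilio API...
};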
Well...that's what KMS was made for :) And certainly more secure than storing your tokens in plaintext in the Lambda function or delegating to a third-party service.
If you go down this route, check out this blog post for an existing usage example to get up and running faster. In particular, you will need to add the following to your Lambda execution role policy:
"kms:Decrypt",
"kms:DescribeKey",
"kms:GetKeyPolicy",
The rest of the code for the above example is a bit convoluted; you should really only need decrypt() in this case.
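For reference, the decrypt call itself can be this small (a sketch using the aws-sdk v2 promise interface; ENCRYPTED_TWILIO_TOKEN is a hypothetical environment variable holding the base64-encoded ciphertext):

const AWS = require('aws-sdk');
const kms = new AWS.KMS();

// Returns the plaintext Twilio auth token.
async function getTwilioToken() {
  const data = await kms.decrypt({
    CiphertextBlob: Buffer.from(process.env.ENCRYPTED_TWILIO_TOKEN, 'base64'),
  }).promise();
  return data.Plaintext.toString('utf8');
}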
There is a blueprint for a Node.js Lambda function that starts off by decrypting an API key from KMS. It provides an easy way to decrypt using a promise interface, and it also gives you the role permissions that you need to grant the Lambda function in order to access KMS. The blueprint can be found by searching for "algorithmia-blueprint".
Whatever you choose to do, you should use a tool like GitMonkey to monitor your code repositories and make sure your keys aren't committed or pushed to them.