I am using sts:assumeRole to connect to an S3 bucket in a different account.
Now, the job that I run takes a few days, and along the way the credentials expire, so I needed a way to renew them.
I have written the following code to handle expiry of the temporary credentials.
This code is inside my downloadFile():
return new Promise((resolve, reject) => {
  function responseCallback(error, data) {
    if (error) {
      const errorMessage = `Failed to download file from s3://${config().s3.bucket}/${path}: ${error}`;
      Logger.error(errorMessage);
      reject(error);
    } else {
      Logger.info(`Successfully downloaded file from s3://${config().s3.bucket}/${path}`);
      resolve(data.Body);
    }
  }
  const fn = this.s3Client.getObject({
    Bucket: config().s3.bucket,
    Key: path
  }, (error, data) => this.handleTokenExpiry(error, data, fn, responseCallback));
});
And this is handleTokenExpiry():
handleTokenExpiry(error, data, fn, callback) {
  if (!error || error.code !== "ExpiredToken") return callback(error, data);
  Logger.info("Token expired, creating new token");
  this.s3Client = null; // null so that init() doesn't return the existing s3Client
  return this.init().then(fn);
}
Here init() is the method which sets this.s3Client using sts:assumeRole and then new AWS.S3().
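For reference, init() looks roughly like this (simplified; the roleArn config key shown here is illustrative):

init() {
  // reuse the existing client unless handleTokenExpiry() cleared it
  if (this.s3Client) return Promise.resolve(this.s3Client);
  const sts = new AWS.STS();
  return sts.assumeRole({
    RoleArn: config().s3.roleArn, // illustrative config key
    RoleSessionName: 'cross-account-download'
  }).promise().then(({ Credentials }) => {
    this.s3Client = new AWS.S3({
      accessKeyId: Credentials.AccessKeyId,
      secretAccessKey: Credentials.SecretAccessKey,
      sessionToken: Credentials.SessionToken
    });
    return this.s3Client;
  });
}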
This works fine, but I am not sure if this is a clean way to do it. The strange thing is that when I test it locally, it takes almost two minutes for responseCallback() to be called when the token is expired, though responseCallback() executes immediately while the token is active.
For tasks running less than 12 hours, here is a solution.
When calling AssumeRole, you can use the DurationSeconds argument to specify the duration of the temporary credentials returned by STS: from a minimum of 15 minutes up to 12 hours.
The role you are assuming needs to be modified to authorize the maximum duration too. See https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use.html#id_roles_use_view-role-max-session
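For example, a minimal sketch with the AWS SDK for JavaScript (v2); the role ARN is a placeholder:

const AWS = require('aws-sdk');
const sts = new AWS.STS();

sts.assumeRole({
  RoleArn: 'arn:aws:iam::123456789012:role/CrossAccountS3Role', // placeholder
  RoleSessionName: 'long-running-download',
  DurationSeconds: 12 * 60 * 60 // 43200s, the maximum STS allows
}).promise().then(({ Credentials }) => {
  const s3Client = new AWS.S3({
    accessKeyId: Credentials.AccessKeyId,
    secretAccessKey: Credentials.SecretAccessKey,
    sessionToken: Credentials.SessionToken
  });
  // s3Client now works for up to 12 hours
});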
In your case, with a job running for several days, I would suggest refactoring the app to run in smaller batches, each running for a few hours.
Another alternative is to be proactive about token expiration. If your code knows the token duration and the time at which it acquired the token, you can call a refresh method before each method that uses the token (such as S3's getObject). That refresh method checks whether the token is about to expire and proactively renews it. Pseudo code would be like:
function refreshToken() {
  return new Promise((resolve, reject) => {
    // xx depends on how long your S3 getObject call takes
    if (token_acquisition_time + token_duration <= now() + xx_minutes) {
      // refresh the token, then resolve
      sts.assumeRole(...).promise().then(resolve, reject);
    } else {
      resolve();
    }
  });
}
...
refreshToken().then(() => s3.getObject(...));
The AWS SDK can handle refreshing the credentials for you. For example:
const credentials = new AWS.ChainableTemporaryCredentials({
  params: {
    RoleArn: "optional-role-arn",
    RoleSessionName: `required-parameter-${Date.now()}`
  }
});

const s3 = new AWS.S3({ credentials });
Now the AWS SDK will refresh the tokens behind the scenes without any action from the caller of s3.
For more information, please see the AWS SDK documentation. Refresh is limited to the validity time of the credentials used.
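For instance, with the s3 client above (bucket and key are placeholders), an expired session is renewed transparently before the request is signed:

s3.getObject({ Bucket: 'my-bucket', Key: 'my-key' }, (err, data) => {
  // if the temporary credentials had expired, the SDK called AssumeRole
  // again behind the scenes before sending this request
});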
Hi All,
I have service A that needs to call service B in a different network domain. To make a call to service B, service A gets an access token from an identity provider, then calls service B with the access token in the HTTP Authorization header. When there are multiple or concurrent requests to service A, I want to minimize the calls to the identity provider. So I plan to implement caching using https://www.npmjs.com/package/lru-cache, which is similar to the approach used by google-auth-library:
https://github.com/googleapis/google-auth-library-nodejs/blob/master/src/auth/jwtaccess.ts.
Service A will call the identity provider to get an access token and store it in the cache. When the next request comes in, service A will use the token from the cache to call service B. If the cached item has expired, service A will fetch a new token and store it in the cache.
I have the following questions:
How do we handle the race condition when concurrent requests to service A cause multiple requests to the identity provider and multiple updates to the cache?
Say the access token has a one-hour expiry. What mechanism can we use to get a new token before it expires?
Any comments would be much appreciated. Thank you in advance.
It sounds like you would benefit from a little singleton object that manages the token for you. You can create an interface for getting the token that does the following:
If no relevant token in the cache, go get a new one and return a promise that will resolve with the token. Store that promise in the cache in place of the token.
If there is a relevant token in the cache, check its expiration. If it has expired or is about to expire, delete it and go to step 1. If it's still good, return a promise that resolves with the cached token (this way it always returns a promise, whether cached or not).
If the cache is in the process of getting a new token, there will be a promise stored in the cache that represents the future arrival of the new token, so the cache can just return that promise and it will resolve to the token that is in the process of being fetched.
The caller's code would look like this:
tokenCache.getToken().then(token => {
  // use token here
});
All the logic behind steps 1, 2 and 3 is encapsulated inside the getToken() method.
Here's an outline for a tokenCache class that hopefully gives you the general idea:
const tokenExpiration = 60 * 60 * 1000; // 1 hr in ms
const tokenBeforeTime = 5 * 60 * 1000; // 5 min in ms

class tokenCache {
  constructor() {
    this.tokenPromise = null;
    this.timer = null;
    // go get the first token
    this._getNewToken().catch(err => {
      console.log("error fetching initial token", err);
    });
  }

  getToken() {
    if (this.tokenPromise) {
      return this.tokenPromise.then(tokenData => {
        // if the token has expired, fetch a new one
        if (tokenData.expires < Date.now()) {
          return this._getNewToken().then(tokenData => tokenData.token);
        } else {
          return tokenData.token;
        }
      });
    } else {
      return this._getNewToken().then(tokenData => tokenData.token);
    }
  }

  // non-public method for getting a new token
  _getNewToken() {
    // for example purposes, this uses the got() library to make an http request;
    // you fill in however you want to contact the identity provider to get a new token
    this.tokenPromise = got(tokenURL).then(response => {
      // make the resolved value an object that contains the token and the expiration,
      // and set a timer to get a new token automatically right before expiration
      this._scheduleTokenRefresh(tokenExpiration - tokenBeforeTime);
      return {
        token: response.body,
        expires: Date.now() + tokenExpiration
      };
    }).catch(err => {
      // on error, clear the cached promise, log the error, keep the promise rejected
      console.log(err);
      this.tokenPromise = null;
      throw err;
    });
    return this.tokenPromise;
  }

  // schedule a call to refresh the token before it expires
  _scheduleTokenRefresh(t) {
    if (this.timer) {
      clearTimeout(this.timer);
    }
    this.timer = setTimeout(() => {
      this._getNewToken().catch(err => {
        console.log("Error updating token before expiration", err);
      });
      this.timer = null;
    }, t);
  }
}
How do we handle the race condition when concurrent requests to service A cause multiple requests to the identity provider and multiple updates to the cache?
You store a promise and always return that promise. Whether you're in the middle of getting a new token or there's already a token in that promise, it doesn't matter. You return the promise and the caller uses .then() or await on the promise to get the token. It "just works" either way.
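For example (a sketch, assuming the tokenCache instance above is shared as a singleton):

const cache = new tokenCache();

// both callers get promises backed by the same in-flight fetch,
// so only one request goes to the identity provider
Promise.all([cache.getToken(), cache.getToken()]).then(([tokenA, tokenB]) => {
  // tokenA === tokenB
});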
Say the access token has a one-hour expiry. What mechanism can we use to get a new token before it expires?
You can check the token for expiration when it's requested and if it's expired, you replace the existing promise with one that represents a new request for the token.
I am working on a Zapier app and there is a tenant id (integer) that is retrieved during authentication that I need to use in a trigger. What is the correct way to do this?
I have tried using global, bundle.authData, and storing the data in a module, but nothing seems to work consistently. The best results came from storing the data in global, but it is inconsistent: out of six calls to the trigger, the tenant id may only be valid twice; the other four times it is returned as undefined.
In the case of global I am writing the data during authentication:
const test = (z, bundle) => {
  return z.request({
    url: URL_PATH + ':' + URL_PORT + '/v1/auth',
    params: {
      username: bundle.authData.username,
      password: bundle.authData.password
    }
  }).then((response) => {
    if (response.status === 401) {
      throw new Error('The username and/or password you supplied is incorrect.');
    } else {
      global.GLOBAL_tenant = response.json.tenant;
      // ...
    }
  });
};
And then attempting to read the data back in the trigger:
const processTransactions = (z, bundle) => {
  let jsonAll = [];
  let tenant = global.GLOBAL_tenant;
  return new Promise((resolve, reject) => {
    (function loop() {
      // ...
I also tried adding the data to bundle.authData; this was the recommendation Zapier made when I contacted them, but the tenant id that I added during authentication:
bundle.authData.tenant = response.json.tenant
is not available when I try to retrieve it in the trigger; only the username and password are present.
I am new to Zapier and node.js so any help will be greatly appreciated.
Instead of assigning to the fully qualified name, as in bundle.authData.tenant = response.json.tenant, use something like tenant = response.json.tenant, and preferably enclose that in a return statement; the bundle.authData qualifier is applied automatically by Zapier.
Global variables should be avoided. Hope this helps.
David here, from the Zapier Platform team.
global isn't going to work because your code runs in multiple lambda executions and state isn't stored between them. Plus, global implies it would be the same for all users, which probably isn't what you want.
Instead, I'd check out session auth, which will let you store extra fields during your test by creating a computed field and returning values for it from sessionConfig.perform. Then it'll be stored in the auth object, next to the username and password.
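A rough sketch of that setup in a CLI app (the auth endpoint URL is a placeholder):

const authentication = {
  type: 'session',
  fields: [
    { key: 'username', required: true },
    { key: 'password', required: true, type: 'password' },
    // computed: true means the user never types this;
    // sessionConfig.perform supplies it instead
    { key: 'tenant', required: false, computed: true }
  ],
  sessionConfig: {
    perform: (z, bundle) => {
      return z.request({
        url: 'https://example.com/v1/auth', // placeholder endpoint
        params: {
          username: bundle.authData.username,
          password: bundle.authData.password
        }
      }).then((response) => {
        // keys returned here are merged into bundle.authData
        return { tenant: response.json.tenant };
      });
    }
  }
};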
Separately, you may want to reconsider whatever code is in processTransactions. Either you can return them all and they'll be deduped on our end, or you're doing a bunch of extra computation that would be better dehydrated. That's just a guess on my part though, so feel free to ignore this part.
I recently changed my AWS credentials in my .env file:
AWS_ACCESS_KEY_ID=XXXXXXXXXXXXX
AWS_SECRET_ACCESS_KEY=
However on every s3.getSignedUrl request, the SDK uses the previous (root) credentials.
s3.getSignedUrl('putObject', s3Params, (err, data) => {
  if (err) {
    return res.end();
  }
  console.log(data); // <---------------
  const returnData = {
    signedRequest: data,
    awsImageUrl: `https://${S3_BUCKET}.s3.amazonaws.com/${imageName}`
  };
  res.json(returnData);
  res.end();
});
This logs
https://my-bucket.s3.amazonaws.com/my-pic.png?AWSAccessKeyId=YYYYYYYYYYYYYYY&Content-Type=image%2Fpng&Expires=SOMEDATE&Signature=SOMESIGNATURE&x-amz-acl=public-read
where YYYYYYYYYYYYYYY is the previous (root) access key ID.
Is it possible that the SDK caches this data?
If so how do I invalidate it?
Or have I overlooked something in code?
AWS Documentation
Expiring and Refreshing Credentials
Occasionally credentials can expire in the middle of a long-running application. In this case, the SDK will automatically attempt to refresh the credentials from the storage location if the Credentials class implements the refresh() method.

If you are implementing a credential storage location, you will want to create a subclass of the Credentials class and override the refresh() method. This method allows credentials to be retrieved from the backing store, be it a file system, database, or some network storage. The method should reset the credential attributes on the object.
When seeking credentials, the JavaScript and Node SDKs use the AWS.CredentialProviderChain.
The default credentials providers are:
AWS.CredentialProviderChain.defaultProviders = [function () {
  return new AWS.EnvironmentCredentials('AWS');
}, function () {
  return new AWS.EnvironmentCredentials('AMAZON');
}, function () {
  return new AWS.SharedIniFileCredentials();
}, function () {
  if (AWS.ECSCredentials.prototype.getECSRelativeUri() !== undefined) {
    return new AWS.ECSCredentials();
  }
  return new AWS.EC2MetadataCredentials();
}]
Thus, it looks in the following locations:
Environment credentials
~/.aws/credentials file
AWS_CONTAINER_CREDENTIALS_RELATIVE_URI
Instance metadata
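A common culprit is that the old key is still reaching the SDK through an earlier provider in that chain, e.g. credentials exported in your shell environment, which dotenv will not overwrite by default. A minimal sketch that loads .env first and passes the environment credentials explicitly (assuming the standard AWS_* variable names):

require('dotenv').config(); // load .env before the SDK reads process.env
const AWS = require('aws-sdk');

const s3 = new AWS.S3({
  // read AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY from the environment now
  credentials: new AWS.EnvironmentCredentials('AWS')
});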
I am trying to use encrypted environment variables in an AWS Lambda function running in Node.js 4.3, but the code hangs when trying to decrypt the variables. I don't get any error messages, it just times out. Here is what I have tried:
I created the encryption key in the same region as the Lambda, and ensured that the role the Lambda runs as has access to the key. (I've even tried giving the role full control of the key.)
When creating the Lambda, I enable encryption helpers, select my encryption key, and encrypt the environment variable.
Next I click the "Code" button, which gives me JavaScript code that's supposed to handle the decryption at runtime. Here is the code; the only changes I have made are to add console.log statements and a try/catch:
"use strict";
const AWS = require('aws-sdk');
const encrypted = process.env['DBPASS'];
let decrypted;
function processEvent(event, context, callback) {
console.log("Decrypted: " + decrypted);
callback();
}
exports.handler = (event, context, callback) => {
if (decrypted) {
console.log('data is already decrypted');
processEvent(event, context, callback);
} else {
console.log('data is NOT already decrypted: ' + encrypted);
// Decrypt code should run once and variables stored outside of the function
// handler so that these are decrypted once per container
const kms = new AWS.KMS();
console.log('got kms object');
try {
var myblob = new Buffer(encrypted, 'base64');
console.log('got blob');
kms.decrypt({ CiphertextBlob: myblob }, (err, data) => {
console.log('inside decrypt callback');
if (err) {
console.log('Decrypt error:', err);
return callback(err);
}
console.log('try to get plaintext');
decrypted = data.Plaintext.toString('ascii');
console.log('decrypted: ' + decrypted);
processEvent(event, context, callback);
});
}
catch(e) {
console.log("exception: " + e);
callback('error!');
}
}
};
Here is what I get when I run the function:
data is NOT already decrypted: AQECAH.....
got kms object
got blob
END RequestId: 9b7af.....
Task timed out after 30.00 seconds
When I run the function, it times out. It prints all log statements up to "got blob", then just stops; there is no error message other than the timeout. I've tried increasing the timeout and memory for the Lambda, but that just makes it wait longer before timing out.
How is decryption supposed to work when I never tell the app what decryption key to use? The documentation for decrypt does not mention any way to specify the key, and I am not getting any error messages that would tell me it doesn't know which key to use.
I've tried going through this tutorial but it just tells me to do the same thing I've already done. I've also read all of the environment variables documentation but it says that what I'm doing should just work.
Decrypting the environment variables requires an API call to the KMS service. To make that call, your Lambda function must have access to the internet, since (at the time of writing) there are no VPC endpoints for KMS. So, if your Lambda is running in a VPC, make sure the VPC has a NAT configured so your Lambda function can reach KMS. (As for how decryption knows which key to use: the ciphertext blob returned by KMS embeds metadata identifying the key, which is why decrypt never needs a key ID.)
I am using the jsforce node module for doing CRUD operations in Salesforce.
For making a connection to Salesforce, I have the following inputs:
username, password, securityToken and loginUrl.
Here's how I make a connection first time.
var conn = new jsforce.Connection({
  loginUrl: connectionDetails.salesforce.loginUrl
});

conn.login(connectionDetails.salesforce.username,
  connectionDetails.salesforce.password + connectionDetails.salesforce.securityToken,
  function(err, userInfo) {
    if (!err) {
      console.log('User with user id ' + userInfo.id + ' successfully logged into Salesforce');
      successCb(conn.accessToken, conn.instanceUrl);
    } else {
      console.log('Login failed to https://test.salesforce.com/');
      errorCb('Login failed to https://test.salesforce.com/');
    }
  });
I store the accessToken and instanceUrl in the req object provided by Express.
After that, I perform any CRUD operation like below:
var salesConn = new jsforce.Connection({
  accessToken: salesforceAccessToken,
  instanceUrl: salesforceInstanceUrl
});

salesConn.sobject('Lead').retrieve(someLeadID, function(err, data) {
  // ...
});
Now suppose I keep my server idle for a few hours, or maybe even a day; if I then perform a CRUD operation, the call fails. I am pretty sure the session has expired.
Now I have two queries:
Is the above the correct way of making a connection to Salesforce using the input connection details I have?
How can I know that the session has expired and make a new session?
PS: I looked into using an access token with a refresh token, but that is only available with the OAuth2 authorization code flow.
A little late, but I had the same issue today, and I fixed it by calling conn.login(username, password + token) whenever I get an invalid session error.
I am doing something different though: I'm not creating a second connection variable to use with my SF calls, but instead use the original conn variable, conn.sobject(...).
It refreshes the token automatically.
My jsforce version is "jsforce": "^1.4.1".
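Here is a minimal sketch of that retry, assuming the expired session surfaces as err.errorCode === 'INVALID_SESSION_ID' and that the login inputs are in scope:

function retrieveLead(conn, leadId) {
  return conn.sobject('Lead').retrieve(leadId).catch((err) => {
    if (err.errorCode === 'INVALID_SESSION_ID') {
      // re-login on the same connection, then retry once
      return conn.login(username, password + securityToken)
        .then(() => conn.sobject('Lead').retrieve(leadId));
    }
    throw err;
  });
}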
jsforce has a _refreshDelegate:
Connection.prototype.login = function(username, password, callback) {
  // register refreshDelegate for session expiration
  this._refreshDelegate = new HttpApi.SessionRefreshDelegate(this, createUsernamePasswordRefreshFn(username, password));
  if (this.oauth2 && this.oauth2.clientId && this.oauth2.clientSecret) {
    return this.loginByOAuth2(username, password, callback);
  } else {
    return this.loginBySoap(username, password, callback);
  }
};