aws-sdk using old ACCESS_KEY_ID - node.js

I recently changed my AWS credentials in my .env file:
AWS_ACCESS_KEY_ID=XXXXXXXXXXXXX
AWS_SECRET_ACCESS_KEY=
However, on every s3.getSignedUrl request, the SDK uses the previous (root) credentials:
s3.getSignedUrl('putObject', s3Params, (err, data) => {
  if (err) {
    return res.end();
  }
  console.log(data); // <---------------
  const returnData = {
    signedRequest: data,
    awsImageUrl: `https://${S3_BUCKET}.s3.amazonaws.com/${imageName}`
  };
  res.json(returnData);
  res.end();
});
This logs
https://my-bucket.s3.amazonaws.com/my-pic.png?AWSAccessKeyId=YYYYYYYYYYYYYYY&Content-Type=image%2Fpng&Expires=SOMEDATE&Signature=SOMESIGNATURE&x-amz-acl=public-read
where YYYYYYYYYYYYYYY is the access key ID of the previous (root) credentials.
Is it possible that the SDK caches this data?
If so how do I invalidate it?
Or have I overlooked something in code?

AWS Documentation
Expiring and Refreshing Credentials
Occasionally credentials can expire in the middle of a long-running
application. In this case, the SDK will automatically attempt to
refresh the credentials from the storage location if the Credentials
class implements the refresh() method.
If you are implementing a credential storage location, you will want
to create a subclass of the Credentials class and override the
refresh() method. This method allows credentials to be retrieved from
the backing store, be it a file system, database, or some network
storage. The method should reset the credential attributes on the
object.
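As an illustration of that pattern, here is a minimal sketch of such a subclass for the AWS SDK for JavaScript v2; the backing-store call loadKeysFromStore() is hypothetical:

const AWS = require('aws-sdk');

class StoreBackedCredentials extends AWS.Credentials {
  refresh(callback) {
    // loadKeysFromStore() stands in for your own backing store (file, database, network)
    loadKeysFromStore((err, keys) => {
      if (err) return callback(err);
      // Reset the credential attributes on the object, as the documentation describes
      this.accessKeyId = keys.accessKeyId;
      this.secretAccessKey = keys.secretAccessKey;
      this.sessionToken = keys.sessionToken;
      this.expireTime = keys.expireTime;
      callback();
    });
  }
}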

When resolving credentials, the JavaScript SDK for Node.js uses AWS.CredentialProviderChain.
The default credential providers are:
AWS.CredentialProviderChain.defaultProviders = [function () {
  return new AWS.EnvironmentCredentials('AWS');
}, function () {
  return new AWS.EnvironmentCredentials('AMAZON');
}, function () {
  return new AWS.SharedIniFileCredentials();
}, function () {
  if (AWS.ECSCredentials.prototype.getECSRelativeUri() !== undefined) {
    return new AWS.ECSCredentials();
  }
  return new AWS.EC2MetadataCredentials();
}]
Thus, it looks in the following locations, in order:
Environment variables prefixed with AWS_ (then AMAZON_)
The shared ~/.aws/credentials file
ECS container credentials (AWS_CONTAINER_CREDENTIALS_RELATIVE_URI)
EC2 instance metadata
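A common cause of a stale key (an assumption, not something confirmed by the question) is that a ~/.aws/credentials file still holds the old root keys, or that the S3 client is constructed before the .env file has been loaded into process.env. A minimal sketch that loads .env first and pins the client to the environment credentials, assuming dotenv is used:

require('dotenv').config(); // make sure .env is loaded before the SDK reads process.env
const AWS = require('aws-sdk');

// Pin the client to the environment credentials so a stale ~/.aws/credentials file cannot win
const s3 = new AWS.S3({
  credentials: new AWS.EnvironmentCredentials('AWS')
});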

Related

Node.JS PowerBI App Owns Data for Customers w/ Service Principal (set "config.json" from a table in my database)

I'm attempting to refactor the "Node.JS PowerBI App Owns Data for Customers w/ Service Principal" code example (found HERE).
My objective is to import the data for the "config.json" from a table in my database and insert the "workspaceId" and "reportId" values from my database into the "getEmbedInfo()" function (inside the "embedConfigServices.js" file). Reason being, I want to use different configurations based on user attributes. I am using Auth0 to login users on the frontend, and I am sending the user metadata to the backend so that I can filter the database query by the user's company name.
I am able to console.log the config data, but I am having difficulty figuring out how to insert those results into the "getEmbedInfo()" function.
It feels like I'm making a simple syntax error somewhere, but I am stuck. Here's a sample of my code:
//----Code Snippet from "embedConfigServices.js" file ----//
async function getEmbedInfo() {
  try {
    const url = ;
    const set_config = async function () {
      let response = await axios.get(url);
      const config = response.data;
      console.log(config);
    };
    set_config();
    const embedParams = await getEmbedParamsForSingleReport(
      config.workspaceId,
      config.reportId
    );
    return {
      accessToken: embedParams.embedToken.token,
      embedUrl: embedParams.reportsDetail,
      expiry: embedParams.embedToken.expiration,
      status: 200,
    };
  } catch (err) {
    return {
      status: err.status,
      error: err.statusText,
    };
  }
}
This is the error I am receiving on the frontend:
"Cannot read property 'get' of undefined"
Any help would be much appreciated. Thanks in advance.
Carlos
The error is because the wrong URL is being fetched. The problem is with the config for the Service Principal: you need to provide the reportId and workspaceId for the SPA, make sure the service principal has been added to the workspace, and follow all the steps in the documentation below for service principal authentication.
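As a rough sketch of the scope issue in the snippet above (the config URL and response field names are assumptions, and axios is assumed to be required at the top of the file), the config fetch can be awaited directly inside getEmbedInfo() so its values are in scope when the embed parameters are requested:

async function getEmbedInfo() {
  try {
    // Hypothetical endpoint that returns { workspaceId, reportId } for the current user
    const url = 'https://your-api.example.com/powerbi-config';
    const response = await axios.get(url);
    const config = response.data;

    const embedParams = await getEmbedParamsForSingleReport(
      config.workspaceId,
      config.reportId
    );
    return {
      accessToken: embedParams.embedToken.token,
      embedUrl: embedParams.reportsDetail,
      expiry: embedParams.embedToken.expiration,
      status: 200,
    };
  } catch (err) {
    return { status: err.status, error: err.statusText };
  }
}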
References:
https://learn.microsoft.com/power-bi/developer/embedded/embed-service-principal

Find a better way to renew AWS credentials

I am using sts:assumeRole to connect to an S3 bucket in a different account.
The job that I run takes a few days, and along the way the temporary credentials expire, so I needed a way to renew them.
I have written the following code to handle expiry of the temporary credentials.
This code is inside my downloadFile():
return new Promise((resolve, reject) => {
  function responseCallback(error, data) {
    if (error) {
      const errorMessage = `Fail to download file from s3://${config().s3.bucket}/${path}: ${error};`;
      reject(error);
    } else {
      Logger.info(`Successfully download file from s3://${config().s3.bucket}/${path}`);
      resolve(data.Body);
    }
  }
  const fn = this.s3Client.getObject({
    Bucket: config().s3.bucket,
    Key: path
  }, (error, data) => this.handleTokenExpiry(error, data, fn, responseCallback));
});
And this is the handleTokenExpiry()
handleTokenExpiry(error, data, fn, callback) {
  if (!error || error.code !== "ExpiredToken") return callback(error, data);
  Logger.info("Token expired, creating new token");
  this.s3Client = null; // null so that init() doesn't return existing s3Client
  return this.init().then(fn);
}
Here init() is the method which sets this.s3Client using sts:assumeRole and then new AWS.S3().
This works fine, but I am not sure it is a clean way to do it. The strange thing is that when I test it locally, it takes almost two minutes for responseCallback() to be called when the token is expired, although responseCallback() gets executed immediately while the token is active.
For tasks running less than 12 hours, here is a solution.
When using AssumeRole, you can pass the DurationSeconds argument to specify the duration of the temporary credentials returned by STS. This is 15 minutes minimum, up to 12 hours.
The role you are assuming needs to be modified to authorize the maximum duration too. See https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use.html#id_roles_use_view-role-max-session
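For example, a minimal sketch using the AWS SDK for JavaScript v2 (the role ARN and session name are placeholders) that requests 12-hour credentials:

const AWS = require('aws-sdk');
const sts = new AWS.STS();

async function getLongLivedCredentials() {
  const { Credentials } = await sts.assumeRole({
    RoleArn: 'arn:aws:iam::123456789012:role/cross-account-s3-role', // placeholder
    RoleSessionName: 'long-running-download',
    DurationSeconds: 12 * 60 * 60 // 12 hours; the role's maximum session duration must allow this
  }).promise();
  return new AWS.Credentials(
    Credentials.AccessKeyId,
    Credentials.SecretAccessKey,
    Credentials.SessionToken
  );
}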
In your case, with a job running for several days, I would suggest refactoring the app to run in smaller batches, each running for a few hours.
Another alternative would be to be proactive about token expiration. If your code knows the token duration and the time at which it acquired the token, it can check, right before calling a method that uses the token (such as S3's getObject), whether the token is about to expire and refresh it proactively. Pseudo code would be like:
function refreshToken() {
  return new Promise((resolve, reject) => {
    // xx depends on how long your S3 getObject call takes
    if (token_acquisition_time + token_duration <= now() + xx_minutes) {
      // refresh token
      sts.assumeRole(...).promise().then(resolve);
    } else {
      resolve();
    }
  });
}
...
refreshToken().then(() => s3.getObject(...));
The AWS SDK can handle refreshing the credentials for you. For example:
const credentials = new AWS.ChainableTemporaryCredentials({
  params: {
    RoleArn: "optional-role-arn",
    RoleSessionName: `required-parameter-${Date.now()}`
  }
});
const s3 = new AWS.S3({ credentials });
Now the AWS SDK will refresh the temporary credentials behind the scenes without any action from the caller of s3.
For more information, please see the AWS SDK documentation. Refreshing is limited to the validity time of the credentials used.

Access Azure Batch from an Azure Function

I'm trying to use a Service Principal to access a Batch pool from an Azure Function and I am running into authentication issues that I don't understand. The initial login with the Service Principal works fine, but then using the credentials to access the Batch pool returns a 401.
Below is a condensed version of my code, with comments at the key points:
module.exports.dispatch = function (context) {
  MsRest.loginWithServicePrincipalSecret('AppId', 'Secret', 'TennantId', function(err, credentials){
    if (err) throw err;
    // This works as it prints the credentials
    context.log(credentials);
    var batch_client = new batch.ServiceClient(credentials, accountUrl);
    batch_client.pool.get('mycluster', function(error, result){
      if (error === null)
      {
        context.log('Accessed pool');
        context.log(result);
      }
      else
      {
        // Request to batch service returns a 401
        if (error.statusCode === 404)
        {
          context.log('Pool not found yet returned 404...');
        }
        else
        {
          context.log('Error occurred while retrieving pool data');
          context.log(error);
        }
        // 'Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly.'
        context.res = { body: error.body.message.value };
        context.done();
      }
    });
  });
};
How can the initial login with a service principal work with no problem, but the credentials it returns not be able to access the Batch pool?
The actual error says to check the auth header on the request, which I can see, and the Authorization header isn't even present.
I've triple-checked the Active Directory access control for the Batch account; the App ID and secret are the ones belonging to the owner of the Batch account. Any ideas what to try next?
The credentials expected by the Azure Batch npm client aren't the Azure Active Directory credentials/token, but the keys for the batch account. You can list your keys using the Azure CLI with a command like the following:
az batch account keys list -g "<resource-group-name>" -n "<batch-account-name>"
Then you can create the credentials parameter with those keys:
var credentials = new batch.SharedKeyCredentials('your-account-name', 'your-account-key');
You could still involve a Service Principal here if you wanted to store your Batch keys in something like Key Vault, but then your code would be (see the sketch below):
Get Service Principal auth against Key Vault to fetch the account name and key
Use the name and key to create the credentials
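A minimal sketch of that flow, assuming the @azure/identity and @azure/keyvault-secrets packages and a Key Vault secret that holds the Batch account key (all names and IDs below are placeholders):

const { ClientSecretCredential } = require('@azure/identity');
const { SecretClient } = require('@azure/keyvault-secrets');
const batch = require('azure-batch');

async function getBatchCredentials() {
  // Service principal auth against Key Vault (placeholder IDs)
  const credential = new ClientSecretCredential('TenantId', 'AppId', 'Secret');
  const vault = new SecretClient('https://my-vault.vault.azure.net', credential);

  // Hypothetical secret name holding the Batch account key
  const accountKey = await vault.getSecret('batch-account-key');
  return new batch.SharedKeyCredentials('my-batch-account', accountKey.value);
}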
You cannot use the same OAuth token returned from the Azure Resource Management endpoint with Batch. Assuming your service principal has the correct RBAC permissions, authenticate against the Azure Batch endpoint, https://batch.core.windows.net/, instead (assuming you are using public Azure).
You do not need to get the shared key credentials for the Batch account; credentials via AAD should be used instead if you are using an AAD service principal.
I happened to run across this same issue and I didn't have the option of using SharedKeyCredentials so I wanted to share my solution in case anyone else finds it helpful.
As fpark mentions, we need to get an OAuth token to use with Batch instead of the default Azure Resource Management. Below is the original code posted by Mark with the minor modification needed to make it work with Batch:
module.exports.dispatch = function (context) {
  let authOptions = { tokenAudience: 'batch' };
  MsRest.loginWithServicePrincipalSecret('AppId', 'Secret', 'TennantId', authOptions, function(err, credentials){
    if (err) throw err;
    // This works as it prints the credentials
    context.log(credentials);
    var batch_client = new batch.ServiceClient(credentials, accountUrl);
    batch_client.pool.get('mycluster', function(error, result){
      if (error === null)
      {
        context.log('Accessed pool');
        context.log(result);
      }
      else
      {
        // Request to batch service returns a 401
        if (error.statusCode === 404)
        {
          context.log('Pool not found yet returned 404...');
        }
        else
        {
          context.log('Error occurred while retrieving pool data');
          context.log(error);
        }
        // 'Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly.'
        context.res = { body: error.body.message.value };
        context.done();
      }
    });
  });
};

firebase nodejs server and gcloud storage--get metadata

For the security of my app I have been obliged to set up a small Node.js server for Firebase, but the problems are not over: the Firebase server SDK does not support the Storage APIs. I read that for this a gcloud storage API is necessary, because Firebase uses the same underlying service.
On the server side it is important to get a file's custom metadata, because I must read and update it, and I cannot find the corresponding functions to get a file's metadata. In the client SDK it is simple:
// Create a reference to the file whose metadata we want to retrieve
var forestRef = storageRef.child('images/forest.jpg');

// Get metadata properties
forestRef.getMetadata().then(function(metadata) {
  // Metadata now contains the metadata for 'images/forest.jpg'
}).catch(function(error) {
  // Uh-oh, an error occurred!
});
And in gcloud storage, what function can I use? Thanks.
You'll want to use the getMetadata() method in gcloud:
var gcloud = require('gcloud');

// Initialize GCS
var gcs = gcloud.storage({
  projectId: 'my-project',
  keyFilename: '/path/to/keyfile.json'
});

// Reference an existing bucket
var bucket = gcs.bucket('foo.appspot.com');

// Reference to a file
var file = bucket.file('path/to/my/file');

// Get the file metadata
file.getMetadata(function(err, metadata, apiResponse) {
  if (err) {
    console.log(err);
  } else {
    console.log(metadata);
  }
});
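Since the question also asks about updating custom metadata, here is a short sketch using the same library's setMetadata() on the same file reference (the key and value are just examples):

// Update custom metadata on the same file reference
file.setMetadata({
  metadata: {
    reviewed: 'true' // example custom key/value pair
  }
}, function(err, apiResponse) {
  if (err) {
    console.log(err);
  } else {
    console.log('Custom metadata updated');
  }
});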

Is there a way to prevent users from editing the local storage session?

I am creating a relational blog where I make use of ember_simple_auth's session to store the session, like:
{"authenticated":{"authenticator":"authenticator:devise","token":"rh2f9iy7EjJXESAM5koQ","email":"user#example.com","userId":1}}
However, in the developer tools in Chrome (and possibly in other browsers), it is quite easy to edit the email and userId in order to impersonate another user upon page reload.
EDIT #1
From the conversation with Joachim and Nikolaj, I now realize that the best way to tackle this problem is to check the authenticity of the localStorage every time I need it (which is only on page reload) instead of attempting to prevent edits.
In order to validate authenticity, I create a promise that must resolve before the session account can be used. The promise serverValidation() requests the creation of a token model with the current localStorage info; when the server gets it, it validates the info and responds with 200 and a simple user serialization of type token if the information is legit. You can check more info in the source code.
Session Account
import Ember from 'ember';

const { inject: { service }, RSVP } = Ember;

export default Ember.Service.extend({
  session: service('session'),
  store: service(),
  serverValidation: false,

  // Create a Promise to handle a server request that validates the current LocalStorage.
  // If valid, then set the Session-Account user.
  loadCurrentUser() {
    if (!Ember.isEmpty(this.get('session.data.authenticated.userId'))) {
      this.serverValidation().then(() => {
        return new RSVP.Promise((resolve, reject) => {
          const userId = this.get('session.data.authenticated.userId');
          // Get User to Session-Account Block
          if (this.get('serverValidation') === true) {
            return this.get('store').find('user', userId).then((user) => {
              this.set('user', user);
              resolve();
            }).catch((reason) => {
              console.log(reason.errors);
              var possible404 = reason.errors.filterBy('status', '404');
              var possible500 = reason.errors.filterBy('status', '500');
              if (possible404.length !== 0) {
                alert('404 | Sign In Not Found Error');
                this.get('session').invalidate();
              } else if (possible500.length !== 0) {
                alert('500 | Sign In Server Error');
                this.get('session').invalidate();
              }
              reject();
            });
          } else {
            alert('Session for Server Validation failed! Logging out!');
            this.get('session').invalidate();
            resolve();
          }
        });
      });
    } else {
      // Session is empty...
    }
  },

  serverValidation() {
    return new RSVP.Promise((resolve) => {
      var tokenAuthentication = this.get('store').createRecord('token', {
        id: this.get('session.data.authenticated.userId'),
        email: this.get('session.data.authenticated.email'),
        authenticity_token: this.get('session.data.authenticated.token'),
      });
      tokenAuthentication.save().then(() => {
        this.set('serverValidation', true);
        console.log('Server Validation complete with 200');
        resolve();
      }).catch((reason) => {
        this.set('serverValidation', false);
        resolve();
      });
    });
  }
});
Token Controller
# Tokens Controller: JSON response through Active Model Serializers
class Api::V1::TokensController < ApiController
  respond_to :json

  def create
    if token_by_id == token_by_token
      if token_by_email == token_by_id
        render json: token_by_id, serializer: TokenSerializer, status: 200
      else
        render json: {}, status: 404
      end
    else
      render json: {}, status: 404
    end
  end

  private

  def token_by_id
    User.find(user_params[:id])
  end

  def token_by_email
    User.find_by(email: user_params[:email])
  end

  def token_by_token
    User.find_by(authentication_token: user_params[:authenticity_token])
  end

  def user_params
    ActiveModelSerializers::Deserialization.jsonapi_parse!(params.to_unsafe_h)
  end
end
There is no way to prevent a user from editing the content of his local storage, session storage, or cookies.
But this should not worry you. The user is identified through the value of the token. The token is generated and sent to him by the authenticator when he logs in. To impersonate another user by editing the session data he would have to know that the other user is logged in, and know the token of that user.
The token is already signed on the server side, a standard JWT mechanism.
Having said that, there are a couple of ways to check for tampering in local storage:
Generate a token the way you already do.
Generate a random secret key to be kept on the server.
Generate a corresponding HMAC using this secret key.
Send the token + HMAC to the user.
When the user sends you this token, first check whether the HMAC is correct; if not, reject the token right away.
If HMAC is correct, validate the token the way you already do.
Another way:
Along with the token, an HMAC checksum can also be stored separately; when the client sends it back to the server, check whether the checksum matches.
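As a minimal Node.js sketch of the HMAC idea (the secret handling and function names are illustrative, not from the original answer):

const crypto = require('crypto');

const HMAC_SECRET = process.env.SESSION_HMAC_SECRET; // server-side secret, never sent to the client

// Issued alongside the token at login
function signToken(token) {
  return crypto.createHmac('sha256', HMAC_SECRET).update(token).digest('hex');
}

// Run on every request before trusting the token
function isUntampered(token, hmacFromClient) {
  const expected = signToken(token);
  const given = String(hmacFromClient || '');
  // timingSafeEqual requires equal-length buffers, so reject mismatched lengths up front
  if (given.length !== expected.length) return false;
  return crypto.timingSafeEqual(Buffer.from(expected), Buffer.from(given));
}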
