AWS/Node.js S3 Signed URL Denied

I'm using Amazon's Node.js aws-sdk to create expiring pre-signed S3 URLs for digital product downloads, and I'm struggling with the result. I have the SDK configured with my keys, and I've tried both a synchronous approach (not shown) and an async approach (shown) to collecting signed URLs. The calls work: I never hit any errors, and signed URLs are returned. Here's the twist: the URLs I get back don't work.
const promises = skus.map(function(sku) {
  const key = productKeys[sku];
  return new Promise((resolve, reject) => {
    s3.getSignedUrl('getObject', {
      Bucket: 'my-products',
      Key: key,
      Expires: 60 * 60 * 24, // time in seconds; 24 hours
    }, function(err, res) {
      if (err) {
        reject(err);
      } else {
        resolve({
          text: productNames[sku],
          url: res,
        });
      }
    });
  });
});
I had assumed the problem was with the keys I had allocated, which belong to an IAM user with full S3 bucket access, so I tried a root-level key pair instead and got the same Access Denied result. Interestingly, the URLs I get back take the form https://my-bucket.s3.amazonaws.com/Path/To/My/Product.zip?AWSAccessKeyId=blahblahMyKey&Expires=43914919&Signature=blahblahmysig&x-amz-security-token=hugelongstring. I've never seen this x-amz-security-token parameter before, and if I simply remove that query param, I still get Access Denied, but for a different reason: the AWSAccessKeyId is one that is not associated with any of my accounts. It's not the one I've configured the SDK with, and it's not one I've allocated on my S3 account. I have no idea where it comes from, or how it relates to the x-amz-security-token param.
Anyway, I'm stumped. I just want a working pre-signed url... what gives? Thanks for your help.
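For what it's worth, an x-amz-security-token parameter on a signed URL means the SDK signed it with temporary credentials (for example, from an EC2 instance role or environment variables) rather than the key you configured, which would also explain the unfamiliar AWSAccessKeyId. A minimal sketch that pins the client to explicit static credentials (assuming aws-sdk v2 and hypothetical key values) to rule out ambient credentials:
const AWS = require('aws-sdk');

// Explicit static credentials (hypothetical values) so the SDK cannot
// silently fall back to temporary credentials from the environment or
// instance metadata, which carry an x-amz-security-token.
const s3 = new AWS.S3({
  accessKeyId: 'MY_ACCESS_KEY_ID',
  secretAccessKey: 'MY_SECRET_ACCESS_KEY',
  region: 'us-east-1', // assumption: the bucket's region
  signatureVersion: 'v4',
});

s3.getSignedUrl('getObject', {
  Bucket: 'my-products',
  Key: 'Path/To/My/Product.zip',
  Expires: 60 * 60 * 24,
}, (err, url) => {
  if (err) throw err;
  console.log(url); // should no longer carry x-amz-security-token
});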

Related

SendGrid API v3 using NodeJS @sendgrid/client - Call is Unauthorized / API Key is valid with full permissions

I'm really struggling with a problem where every call I make with @sendgrid/client comes back unauthorized. The exact same key works fine with @sendgrid/mail in the same code.
I've tried everything but cannot get the NodeJS @sendgrid/client to work. I've even tried a brand-new API key and still get unauthorized.
I'm trying to use the /suppressions API. I can make the exact same call, with the same API key, in Postman and it works fine.
What am I doing wrong here?
Here is my code:
const sgClient = require('@sendgrid/client');
const config = require('config');

const sgApiKey = config.get('sendGrid_APIKey');
sgClient.setApiKey(sgApiKey);

const email = "test@abc.com";
const headers = {
  // Note: this value is the documentation's description of the header;
  // on-behalf-of expects an actual subuser username.
  "on-behalf-of": "The subuser's username. This header generates the API call as if the subuser account was making the call."
};
const request = {
  url: `/v3/asm/suppressions/global/${email}`,
  method: 'GET',
  headers: headers
};

sgClient.request(request)
  .then(([response, body]) => {
    console.log(response.statusCode);
    console.log(response.body);
  })
  .catch(error => {
    console.error(error);
  });
I have tried creating a new API key with full permissions, and it still does not work.
I expect the existing API key I already use for sending email to work for all of the /suppressions API.
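One hypothesis worth checking: the on-behalf-of value above is the documentation's description of the header rather than an actual subuser username, and SendGrid rejects calls made on behalf of a subuser it cannot find. A minimal sketch with the header omitted (or set to a real subuser name, hypothetical here):
const sgClient = require('@sendgrid/client');
sgClient.setApiKey(process.env.SENDGRID_API_KEY);

const email = 'test@abc.com';

const request = {
  url: `/v3/asm/suppressions/global/${email}`,
  method: 'GET',
  // Omit on-behalf-of entirely unless you really are acting for a subuser;
  // if you are, it must be the subuser's actual username, e.g.:
  // headers: { 'on-behalf-of': 'my-subuser' }, // hypothetical username
};

sgClient.request(request)
  .then(([response, body]) => {
    console.log(response.statusCode, body);
  })
  .catch(console.error);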

How to query the gitlab API from the browser?

Just to give some context: I'd like to implement a blog with GitLab Pages, so I want to use snippets to store the articles and comments. The issue is that querying the API from the browser triggers a CORS error. Here is the infamous code:
const postJson = function(url, body) {
  const client = new XMLHttpRequest();
  client.open('POST', url);
  client.setRequestHeader('Content-Type', 'application/json');
  return new Promise((resolve, reject) => {
    client.onreadystatechange = () => {
      if (client.readyState === 4) {
        client.status === 200
          ? resolve(client.responseText)
          : reject({ status: client.status, message: client.statusText, response: client.responseText });
      }
    };
    client.send(body);
  });
};
postJson('https://gitlab.com/api/graphql', `query {
  project(fullPath: "Boiethios/test") {
    snippets {
      nodes {
        title
        blob {
          rawPath
        }
      }
    }
  }
}`).then(console.log, console.error);
That makes perfect sense, because allowing it would let any site fraudulently use the user's GitLab session.
There are several options:
- Ideally, I would like an option to disable all forms of authentication (particularly the session), so that I could only access the information that is public to everybody.
- I could use a personal access token, but I'm not comfortable with that, because the scopes are not fine-grained at all, and leaking such a PAT would let anybody see everything in my account. (doesn't work)
- I could use OAuth2 to ask every reader for authorization to access their GitLab account, but nobody wants to authenticate just to read something.
- I could create a dummy account and then create a PAT for it. That's the best option IMO, but it adds some unnecessary complexity. (doesn't work)
What is the correct way to query the GitLab API from the browser?
After some research, I found a way to get the articles and the comments. The CORS policy was triggered because the POST request with JSON content requires a CORS preflight; a mere GET request does not have this restriction.
I recovered the information in two steps:
- I created a dummy account, so that I had a token to query the API for my public information only;
- then I used the API v4 instead of the GraphQL one:
// Gets the snippets information:
fetch('https://gitlab.com/api/v4/projects/7835068/snippets?private_token=AmPeG6zykNxh1etM-hN3')
  .then(response => response.json())
  .then(console.log);

// Gets the comments of a snippet:
fetch('https://gitlab.com/api/v4/projects/7835068/snippets/1742788/discussions?private_token=AmPeG6zykNxh1etM-hN3')
  .then(response => response.json())
  .then(console.log);
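A small follow-up sketch, assuming the same project ID and token shown above: the two endpoints can be combined to fetch every snippet together with its discussions in one pass.
const base = 'https://gitlab.com/api/v4/projects/7835068';
const token = 'AmPeG6zykNxh1etM-hN3'; // the dummy account's token from above

// Fetch all snippets, then the discussions for each one.
fetch(`${base}/snippets?private_token=${token}`)
  .then(response => response.json())
  .then(snippets => Promise.all(snippets.map(snippet =>
    fetch(`${base}/snippets/${snippet.id}/discussions?private_token=${token}`)
      .then(response => response.json())
      .then(discussions => ({ snippet, discussions }))
  )))
  .then(console.log, console.error);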

Multiple download requests from S3 signed URLs using Node.js

I have created around 500 signed URLs for objects located in S3. Now I try to download those objects from the signed URLs in a loop:
const request = require('request'); // the callback-based "request" library

await Promise.all(signedUrls.map(async (url) => {
  // Note: request() takes a callback and does not return a promise, so
  // this `await` resolves immediately and all ~500 downloads start at once.
  const val = await request(url, (error, response) => {
    if (!error) {
      console.log('Downloaded successfully');
    } else {
      console.log('error in downloading', error.message);
    }
  });
}));
I get this error for some of the URLs.
error in downloading getaddrinfo ENOTFOUND s3.amazonaws.com s3.amazonaws.com:443
I know all the signed URLs are correct, since I checked them individually, but I suspect there is an issue on the S3 side when downloading the files.
I need to check whether S3 has any limit on requesting too many files at once.
S3 has no practical limit on the number of downloads or the number of concurrent downloads. In theory there must be a limit, because there is a finite amount of hardware in the AWS data centers, but that limit is so high that in practice you cannot reach it. Note that getaddrinfo ENOTFOUND is a client-side DNS lookup failure, not an S3 rejection, so the place to look is how many requests you have in flight at once.
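A minimal sketch of one way to avoid overwhelming the local resolver and socket pool, assuming Node 18+ with the built-in fetch API: download the signed URLs in fixed-size batches rather than starting all 500 at once.
const BATCH_SIZE = 25; // assumption: tune to your environment

async function downloadInBatches(signedUrls) {
  for (let i = 0; i < signedUrls.length; i += BATCH_SIZE) {
    const batch = signedUrls.slice(i, i + BATCH_SIZE);
    // Only BATCH_SIZE DNS lookups/sockets are in flight at any time.
    await Promise.all(batch.map(async (url) => {
      try {
        const response = await fetch(url);
        if (!response.ok) throw new Error(`HTTP ${response.status}`);
        await response.arrayBuffer(); // drain the body
        console.log('Downloaded successfully');
      } catch (error) {
        console.log('error in downloading', error.message);
      }
    }));
  }
}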

aws-sdk s3.getObject with redirect

In short, I'm trying to resize an image through a redirect, AWS Lambda, and the aws-sdk.
Following along with the AWS tutorial on resizing images on the fly, AWS - resize-images-on-the-fly, I've managed to make everything work according to the walkthrough; my question, however, is about making the call to the bucket.
Currently the only way I can get this to work is by calling
http://MY_BUCKET_WEBSITE_HOSTNAME/25x25/blue_marble.jpg
If the image isn't available, the request is redirected, the image is resized, and then placed back in the bucket.
What I would like to do is access the bucket through the aws-sdk's s3.getObject() call, rather than through that direct link.
As of now, I can only access images that already exist in the bucket, so the redirect never happens.
My thought was that the request wasn't being sent to the correct endpoint, and based on what I found online, I changed the way the SDK client is created to this:
const aws = require('aws-sdk');

const s3 = new aws.S3({
  accessKeyId: "myAccessKeyId",
  secretAccessKey: "mySecretAccessKey",
  region: "us-west-2",
  endpoint: '<MYBUCKET>.s3-website-us-west-2.amazonaws.com',
  s3BucketEndpoint: true,
  sslEnabled: false,
  signatureVersion: 'v4'
});

const params = {
  Bucket: 'MY_BUCKET',
  Key: '85x85/blue_marble.jpg'
};

s3.getObject(params, (error, data) => data);
From what I can tell, the endpoints in the request look correct, and when I visit them directly in the browser everything works as expected.
But when using the SDK, only images that already exist are returned: there is no redirect, no data comes back, and I get the error
XMLParserError: Non-whitespace before first tag.
I'm not sure whether this is possible with s3.getObject(); it seems like it may be, but I can't figure it out.
Use headObject to check whether the object exists. If it doesn't, you can call your resize API and then retry the get once the resize has completed.
var params = {
  Bucket: config.get('s3bucket'),
  Key: path
};

s3.headObject(params, function (err, metadata) {
  if (err && err.code === 'NotFound') {
    // Call your resize API here. Once it returns success,
    // you can get the object/URL.
  } else {
    s3.getSignedUrl('getObject', params, callback); // use this signed URL to access the object
  }
});
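As a side note on the XMLParserError: the SDK expects an XML response from the REST endpoint, while the website endpoint (which is what performs the redirect in the tutorial) answers with HTML, so pointing the SDK client at the website hostname is likely what produces "Non-whitespace before first tag". A rough sketch of one way to combine the two, assuming the tutorial's redirect is configured on the website endpoint and Node 18+ for fetch: trigger the resize with a plain HTTP request to the website URL, then read the object through the SDK.
const aws = require('aws-sdk');

const s3 = new aws.S3({ region: 'us-west-2' }); // plain REST endpoint, no endpoint override

// Hypothetical helper: request the website URL first so the 404 -> Lambda
// resize -> redirect cycle runs, then fetch the object via the SDK.
async function getResized(bucket, key, websiteHost) {
  // fetch follows the redirect chain, so by the time this resolves,
  // the resized image should exist in the bucket.
  await fetch(`http://${websiteHost}/${key}`);
  return s3.getObject({ Bucket: bucket, Key: key }).promise();
}

getResized('MY_BUCKET', '85x85/blue_marble.jpg',
           'MY_BUCKET.s3-website-us-west-2.amazonaws.com')
  .then(data => console.log('got', data.ContentLength, 'bytes'))
  .catch(console.error);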

S3 Force File Download with NodeJS

I am trying to force files to download from Amazon S3 using the GET request parameter response-content-disposition.
I first created a signed URL, which works fine when I want to view the file.
I then attempt to redirect there with the response-content-disposition header. Here is my code:
res.writeHead(302, {
  'response-content-disposition': 'attachment',
  'Location': 'http://s3-eu-west-1.amazonaws.com/mybucket/test/myfile.txt?Expires=1501018110&AWSAccessKeyId=XXXXXX&Signature=XXXXX',
});
However, this just redirects to the file and does not download it.
Also, when I try to visit the file with response-content-disposition as a GET variable:
http://s3-eu-west-1.amazonaws.com/mybucket/test/myfile.txt?Expires=1501018110&AWSAccessKeyId=XXXXXX&Signature=XXXXX&response-content-disposition=attachment
...I receive the following response:
The request signature we calculated does not match the signature you provided. Check your key and signing method.
Hi, you can force a file to download, or change the file name, using the sample code below. This sample downloads a file using a pre-signed URL.
The important part is the ResponseContentDisposition key in the params of the getSignedUrl method. There is no need to pass any header such as content-disposition in your request.
var aws = require('aws-sdk');
var s3 = new aws.S3();

exports.handler = function (event, context) {
  var params = {
    Bucket: event.bucket,
    Key: event.key,
    // Including ResponseContentDisposition here makes it part of the
    // signature, so the resulting URL forces a download with this filename.
    ResponseContentDisposition: 'attachment;filename=' + 'myprefix' + event.key
  };
  s3.getSignedUrl('getObject', params, function (err, url) {
    if (err) {
      console.log(JSON.stringify(err));
      context.fail(err);
    } else {
      context.succeed(url);
    }
  });
};
The correct way to use the response-content-disposition option is to include it as a GET variable, but you're not calculating the signature correctly: the parameter has to be part of the string you sign, so you cannot simply append it to an already-signed URL.
You can find more information on how to calculate the signature in the Amazon REST Authentication guide.
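To illustrate (a rough sketch only, assuming the URL above uses signature version 2 and hypothetical keys): response-content-disposition is one of the sub-resources that belongs in the canonicalized resource of the string to sign, which is why appending it afterwards invalidates the signature.
const crypto = require('crypto');

// Sketch of a V2 query-string signature that covers the
// response-content-disposition sub-resource.
function signedDownloadUrl(accessKeyId, secretAccessKey, bucket, key, expires) {
  // The sub-resource is part of the canonicalized resource being signed.
  const resource = `/${bucket}/${key}?response-content-disposition=attachment`;
  // V2 string to sign: verb, MD5, content type, expiry, canonicalized resource.
  const stringToSign = `GET\n\n\n${expires}\n${resource}`;
  const signature = crypto.createHmac('sha1', secretAccessKey)
    .update(stringToSign)
    .digest('base64');
  return `http://s3-eu-west-1.amazonaws.com${resource}` +
    `&Expires=${expires}&AWSAccessKeyId=${accessKeyId}` +
    `&Signature=${encodeURIComponent(signature)}`;
}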