I just need to get a presigned URL to do the upload on the front end.
But in my situation, I don't know why, I can only get the URL after the second call.
const AWS = require('aws-sdk')
const s3 = new AWS.S3()
AWS.config.update({
  accessKeyId: 'secretId',
  secretAccessKey: 'secretAccessKeyId'
})
return s3.getSignedUrl('putObject', {
  Bucket: 'eps-file-default',
  Key: 'picture-test.png',
  Expires: 300
})
Here you can see the first response:
"https://s3.amazonaws.com/" // the problem is here
And here you can see the second response:
"https://eps-file-default.s3.amazonaws.com/picture-test.png?AWSAccessKeyId=mysecret&Expires=1595246561&Signature=3uEK7zrqUDUv6hGriN3TraUnoOo%3D"
If you have the solution, thank you so much.
I found the solution! I hope this can help someone: the config update has to come before the S3 client is created.
const AWS = require('aws-sdk')
AWS.config.update({
  accessKeyId: 'secret',
  secretAccessKey: 'secret2'
})
const s3 = new AWS.S3()
return s3.getSignedUrl('putObject', {
  Bucket: bucketname,
  Key: filename,
  Expires: 900
})
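As another answer further down also shows, you can sidestep the ordering issue entirely by passing the credentials directly to the S3 constructor. A minimal sketch with placeholder credentials:

const AWS = require('aws-sdk')

// Credentials go straight to the client, so there is no update-order pitfall
const s3 = new AWS.S3({
  accessKeyId: 'yourAccessKeyId',
  secretAccessKey: 'yourSecretAccessKey'
})

return s3.getSignedUrl('putObject', {
  Bucket: bucketname,
  Key: filename,
  Expires: 900
})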
I'm using Minio Server to handle files in my Node.js API, basically to emulate S3 locally. I generate presigned URLs to upload images directly.
The presigned URL generation works fine, but when I upload my file from Postman it gives me this error:
<?xml version="1.0" encoding="UTF-8"?>
<Error>
  <Code>MissingFields</Code>
  <Message>Missing fields in request.</Message>
  <Key>records/marbles.jpg</Key>
  <BucketName>bucket</BucketName>
  <Resource>/bucket/records/marbles.jpg</Resource>
  <RequestId>16E442AB40F8A81F</RequestId>
  <HostId>0149bd16-e056-4def-ba82-2e91346c807c</HostId>
</Error>
The request seems to contain the required headers as mentioned in this thread, and I also select the file properly in Postman (Body > binary > select file).
The code I use for presigned URL generation is:
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';
import { PutObjectCommand, S3Client } from '@aws-sdk/client-s3';

const s3Client = new S3Client({
  region: 'us-east-1',
  credentials: {
    accessKeyId: 'minioadmin',
    secretAccessKey: 'minioadmin',
  },
  endpoint: 'http://172.21.0.2:9000',
  forcePathStyle: true,
});
const bucketParams = {
  Bucket: 'myBucket',
  Key: 'marbles.jpg',
};

const command = new PutObjectCommand(bucketParams);
const signedUrl = await getSignedUrl(s3Client, command, {
  expiresIn: 10000,
})
I was trying different ports, and the PUT command seems to work when I use only localhost for URL generation.
So, instead of this:
new S3Client({
  region: 'us-east-1',
  credentials: {
    accessKeyId: 'minioadmin',
    secretAccessKey: 'minioadmin',
  },
  endpoint: 'http://172.21.0.2:9000',
  forcePathStyle: true,
});
I use:
new S3Client({
  region: 'us-east-1',
  credentials: {
    accessKeyId: 'minioadmin',
    secretAccessKey: 'minioadmin',
  },
  endpoint: 'http://172.21.0.2', // or 127.0.0.1
  forcePathStyle: true,
});
Note that I haven't used any port number, so the default is 80.
If you're using docker-compose, add this config:
...
ports:
  - "80:9000"
and it works fine.
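For reference, a minimal docker-compose service for MinIO with that mapping might look like this (a sketch; the service name, image, and data path are assumptions):

services:
  minio:
    image: minio/minio
    command: server /data
    ports:
      - "80:9000"   # host port 80 -> MinIO's default port 9000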
In my AWS S3 bucket, I have the same file with different versions. How can I generate a pre-signed URL for each version of the same file?
I am using the Node.js AWS SDK.
Try the following code to get a pre-signed URL for a specific version of an object in an AWS S3 bucket using the Node.js AWS SDK:
const aws = require('aws-sdk');
const AWS_SIGNATURE_VERSION = 'v4';

const s3 = new aws.S3({
  accessKeyId: <aws-access-key>,
  secretAccessKey: <aws-secret-access-key>,
  region: <aws-region>,
  signatureVersion: AWS_SIGNATURE_VERSION
});

const url = s3.getSignedUrl('getObject', {
  Bucket: <aws-s3-bucket-name>,
  Key: <aws-s3-object-key>,
  VersionId: <aws-s3-object-version-id>,
  Expires: <url-expiry-time-in-seconds>
})

console.log(url)
Note: Don't forget to replace the placeholders (<...>) with actual values to make it work.
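If you don't already have the version IDs, you can list them with the same SDK before signing. A minimal sketch, assuming versioning is enabled on the bucket and using the same placeholders:

const versions = await s3.listObjectVersions({
  Bucket: <aws-s3-bucket-name>,
  Prefix: <aws-s3-object-key>
}).promise();

// Each entry carries the VersionId you can feed into getSignedUrl
versions.Versions.forEach(v => console.log(v.Key, v.VersionId, v.IsLatest));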
I have code for uploading files to an AWS S3 bucket:
var upload = multer({
  storage: multerS3({
    s3: s3,
    bucket: 'mybucketname',
    key: function (req, file, cb) {
      cb(null, Date.now().toString())
    }
  }),
  fileFilter: myfilefiltergeshere...
})
I want to download the uploaded file. I don't know how it could be done, because I do not really know how to identify the file on S3. Is it the key field in the upload, or is it something else I have to specify?
For download you can do the following:
import AWS from 'aws-sdk'

AWS.config.update({
  accessKeyId: '....',
  secretAccessKey: '...',
  region: '...'
})

const s3 = new AWS.S3()

async function download (filename) {
  const { Body } = await s3.getObject({
    Key: filename,
    Bucket: 'mybucketname'
  }).promise()
  return Body
}

const dataFiles = await Promise.all(files.map(file => download(file)))
I have files in an array, which is why I used files.map, but the code should give you some guidance; this might help you.
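To know which filename to pass to download(), you can capture the key that multer-s3 generates at upload time; it is exposed on req.file. A minimal sketch, assuming an Express route named /upload:

app.post('/upload', upload.single('file'), (req, res) => {
  // multer-s3 attaches the generated object key to req.file.key
  res.json({ key: req.file.key })
})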
I am using the NodeJS AWS SDK to generate a presigned S3 URL. The docs give an example of generating a presigned URL.
Here is my exact code (with sensitive info omitted):
const AWS = require('aws-sdk')
const s3 = new AWS.S3()
AWS.config.update({accessKeyId: 'id-omitted', secretAccessKey: 'key-omitted'})

// Tried with and without this. Since s3 is not region-specific, I don't
// think it should be necessary.
// AWS.config.update({region: 'us-west-2'})

const myBucket = 'bucket-name'
const myKey = 'file-name.pdf'
const signedUrlExpireSeconds = 60 * 5

const url = s3.getSignedUrl('getObject', {
  Bucket: myBucket,
  Key: myKey,
  Expires: signedUrlExpireSeconds
})
console.log(url)
The URL this generates looks like this:
https://bucket-name.s3-us-west-2.amazonaws.com/file-name.pdf?AWSAccessKeyId=[access-key-omitted]&Expires=1470666057&Signature=[signature-omitted]
I am copying that URL into my browser and getting the following response:
<Error>
  <Code>NoSuchBucket</Code>
  <Message>The specified bucket does not exist</Message>
  <BucketName>[bucket-name-omitted]</BucketName>
  <RequestId>D1A358D276305A5C</RequestId>
  <HostId>
    bz2OxmZcEM2173kXEDbKIZrlX508qSv+CVydHz3w6FFPFwC0CtaCa/TqDQYDmHQdI1oMlc07wWk=
  </HostId>
</Error>
I know the bucket exists. When I navigate to this item via the AWS web GUI and double-click on it, it opens the object with a URL that works just fine:
https://s3-us-west-2.amazonaws.com/[bucket-name-omitted]/[file-name-omitted].pdf?X-Amz-Date=20160808T141832Z&X-Amz-Expires=300&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Signature=[signature-omitted]&X-Amz-Credential=ASIAJKXDBR5CW3XXF5VQ/20160808/us-west-2/s3/aws4_request&X-Amz-SignedHeaders=Host&x-amz-security-token=[really-long-key]
So I am led to believe that I must be doing something wrong with how I'm using the SDK.
Dustin,
Your code is correct; double-check the following:
Your bucket access policy.
Your bucket permission via your API key.
Your API key and secret.
Your bucket name and key.
Since this question is very popular and the most popular answer says your code is correct, there is nevertheless a subtle problem in the code which can lead to a frustrating issue. So here is working code:
const AWS = require('aws-sdk')

AWS.config.update({
  accessKeyId: ':)))',
  secretAccessKey: ':DDDD',
  region: 'ap-south-1',
  signatureVersion: 'v4'
});

const s3 = new AWS.S3()
const myBucket = ':)))))'
const myKey = ':DDDDDD'
const signedUrlExpireSeconds = 60 * 5

const url = s3.getSignedUrl('getObject', {
  Bucket: myBucket,
  Key: myKey,
  Expires: signedUrlExpireSeconds
});

console.log(url);
The noticeable difference is that the s3 object is created after the config update; without this, the config is not effective and the generated URL doesn't work.
Here is the complete code for generating a pre-signed (putObject) URL for any type of file in S3.
If you want, you can include an expiration time using the Expires parameter.
The code below will upload any type of file, such as Excel (xlsx), PDF, or JPEG:
const AWS = require('aws-sdk');
const fs = require('fs');
const axios = require('axios');

const s3 = new AWS.S3();
const filePath = 'C:/Users/XXXXXX/Downloads/invoice.pdf';

var params = {
  Bucket: 'testing-presigned-url-dev',
  Key: 'dummy.pdf',
  ContentType: 'application/octet-stream'
};

s3.getSignedUrl('putObject', params, function (err, url) {
  console.log('The URL is', url);
  fs.writeFileSync('./url.txt', url);
  axios({
    method: 'put',
    url,
    data: fs.readFileSync(filePath),
    headers: {
      'Content-Type': 'application/octet-stream'
    }
  })
    .then((result) => {
      console.log('result', result);
    }).catch((err) => {
      console.log('err', err);
    });
});
I had a use case where, using Node.js, I wanted to get an object from S3, download it to some temp location, and then pass it as an attachment to a third-party service. This is how I broke the code down:
1. Get a signed URL from S3.
2. Make a REST call to get the object.
3. Write it to a local location.
It may help anyone with the same use case; check out the link below, and see the sketch that follows it.
https://medium.com/@prateekgawarle183/fetch-file-from-aws-s3-using-pre-signed-url-and-store-it-into-local-system-879194bfdcf4
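A minimal sketch of those three steps (bucket, key, and the temp path are placeholders):

const AWS = require('aws-sdk')
const axios = require('axios')
const fs = require('fs')

const s3 = new AWS.S3()

async function fetchToLocal () {
  // 1. Get a signed URL from S3
  const url = s3.getSignedUrl('getObject', {
    Bucket: 'my-bucket',
    Key: 'my-file.pdf',
    Expires: 300
  })
  // 2. Make a REST call to get the object
  const res = await axios.get(url, { responseType: 'arraybuffer' })
  // 3. Write it to a local location
  fs.writeFileSync('/tmp/my-file.pdf', res.data)
}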
For me, I was getting a 403 because the IAM role I had used to get the signed URL was missing the s3:GetObject permission for the bucket/object in question. Once I added this permission to the IAM role, the signed URL began to work correctly.
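For reference, a minimal IAM policy statement granting that permission might look like this (the bucket name is a placeholder):

{
  "Effect": "Allow",
  "Action": "s3:GetObject",
  "Resource": "arn:aws:s3:::my-bucket/*"
}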
Probably not the answer you are looking for, but it turned out I had swapped AWS_ACCESS_KEY_ID with AWS_SECRET_ACCESS_KEY.
Future visitors might want to double-check that.
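A quick way to check is to log the credentials the SDK actually resolved; a sketch that prints only the non-secret part:

const AWS = require('aws-sdk')

AWS.config.getCredentials((err) => {
  if (err) console.error(err)
  // Long-term IAM access key IDs start with "AKIA" (temporary ones with
  // "ASIA"); the secret key does not. If this looks like a secret, the
  // environment variables are probably swapped.
  else console.log('Resolved key ID:', AWS.config.credentials.accessKeyId)
})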
Try this function with a promise:
const AWS = require("aws-sdk");

const s3 = new AWS.S3({
  accessKeyId: 'AK--------------6U',
  secretAccessKey: 'kz---------------------------oGp'
});

const getSignedUrl = async () => {
  const params = {
    Bucket: 'bucket-name',
    Key: 'file-name.pdf',
    Expires: 60 * 5
  };
  try {
    const url = await new Promise((resolve, reject) => {
      s3.getSignedUrl('getObject', params, (err, url) => {
        err ? reject(err) : resolve(url);
      });
    });
    console.log(url)
  } catch (err) {
    console.log(err)
  }
}

getSignedUrl()
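Note that newer releases of the v2 SDK also ship a built-in promise variant, getSignedUrlPromise, which makes the manual wrapper unnecessary:

const url = await s3.getSignedUrlPromise('getObject', params);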
I want to get a signed URL from my Amazon S3 server. I am new to AWS. Where do I set my secret key and access key ID so that S3 identifies requests from my server?
var express = require('express');
var app = express();

var AWS = require('aws-sdk')
  , s3 = new AWS.S3()
  , params = {Bucket: 'my-bucket', Key: 'path/to/key', Expiration: 20}

s3.getSignedUrl('getObject', params, function (err, url) {
  console.log('Signed URL: ' + url)
})

app.listen(8000)
You can also set the credentials per bucket if you are working with multiple buckets; you just need to pass the credentials into the constructor of the S3 object, like so:
var AWS = require('aws-sdk');

var credentials = {
  accessKeyId: AWS_CONSTANTS.S3_KEY,
  secretAccessKey: AWS_CONSTANTS.S3_SECRET,
  region: AWS_CONSTANTS.S3_REGION
};

var s3 = new AWS.S3(credentials);
var params = {Bucket: 'bucket-name', Key: 'key-name', Expires: 20};

s3.getSignedUrl('getObject', params, function (err, url) {
  console.log('Signed URL: ' + url);
});
Later I solved my issue.
This was pretty helpful: http://aws.amazon.com/sdkfornodejs/
Moreover, you can also hardcode your credentials, as follows (note that the config update has to come before the S3 client is created):
var express = require("express");
var app = express();

var AWS = require('aws-sdk');

// Set credentials and region before creating the S3 client,
// otherwise the config is not applied.
AWS.config.update({accessKeyId: 'Your-Access-Key-Id', secretAccessKey: 'Your-secret-key'});
AWS.config.region = 'us-west-2';

var s3 = new AWS.S3();
var params = {Bucket: 'your-bucket-name on s3', Key: 'key-name-on s3 you want to store under', Expires: 20};

s3.getSignedUrl('getObject', params, function (err, url) {
  console.log('Signed URL: ' + url);
});
app.listen(8000);