PermanentRedirect while generating a pre-signed URL - node.js

I am having an issue while creating a pre-signed URL for AWS S3 using aws-sdk in Node.js. It gives me: PermanentRedirect: The bucket you are attempting to access must be addressed using the specified endpoint.
const AWS = require('aws-sdk')
const s3 = new AWS.S3()
AWS.config.update({accessKeyId: 'test123', secretAccessKey: 'test123'})
AWS.config.update({region: 'us-east-1'})
const myBucket = 'test-bucket'
const myKey = 'test.jpg'
const signedUrlExpireSeconds = 60 * 60
const url = s3.getSignedUrl('getObject', {
    Bucket: myBucket,
    Key: myKey,
    Expires: signedUrlExpireSeconds
})
console.log(url)
How can I resolve this error so the pre-signed URL works? Also, what is the purpose of Key?

1st - what is the region of your bucket? S3 is a global service, yet each bucket belongs to a region that you select when creating it.
2nd - when working with S3 in a region other than N. Virginia, there can be situations where AWS's internal SSL/DNS is not yet in sync. I have hit this multiple times and can't find exact docs on it, but the symptoms are redirects, not-found errors, or access errors; after 4-12 hours it simply starts to work. What I have been able to dig up is that it relates to AWS's internal SSL/DNS handling of S3 buckets outside the N. Virginia region. So that could be it.
3rd - if you re-created buckets multiple times and re-used the same name: bucket names are global, even though buckets are regional. So this could again come down to the 2nd scenario, where within the last 24 hours the bucket name actually pointed to a different region and AWS's internal DNS/SSL hasn't synced yet.
p.s. Key is the object's key; every object inside a bucket has a key. In the AWS console you can navigate a "key" that looks like a path to a file, but it is not a path. S3 has no concept of directories the way hard drives do; any "path" to a file is just the object's key. The console merely splits the key on / and displays the parts as directories for better UX while navigating the UI.
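Building on the 1st point, here is a minimal sketch based on the question's snippet, assuming the bucket really does live in us-east-1 (credentials, bucket name and key are placeholders). The important detail is that the region is configured before new AWS.S3() is constructed, so the signed URL is built against the bucket's own regional endpoint:
const AWS = require('aws-sdk')

// Configure credentials and the bucket's actual region first...
AWS.config.update({
    accessKeyId: 'test123',
    secretAccessKey: 'test123',
    region: 'us-east-1' // must match the region the bucket was created in
})

// ...then construct the client (SigV4 is required in newer regions)
const s3 = new AWS.S3({ signatureVersion: 'v4' })

const url = s3.getSignedUrl('getObject', {
    Bucket: 'test-bucket',
    Key: 'test.jpg',   // the object's key, i.e. its full "path" inside the bucket
    Expires: 60 * 60   // link validity in seconds
})

console.log(url)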

Related

How to get the file path in AWS Lambda?

I would like to send a file to Google Cloud Platform using their client library, as in this example (Node.js code sample): https://cloud.google.com/storage/docs/uploading-objects
My current code looks like this:
const { Storage } = require('@google-cloud/storage');
const storage = new Storage();

const s3Bucket = 'bucket_name';
const s3Key = 'folder/filename.extension';
const filePath = s3Bucket + "/" + s3Key;
await storage.bucket(s3Bucket).upload(filePath, {
    gzip: true,
    metadata: {
        cacheControl: 'public, max-age=31536000',
    },
});
But when I do this there is an error:
"ENOENT: no such file or directory, stat
'ch.ebu.mcma.google.eu-west-1.ibc.websiteExtract/AudioJobResults/audioGoogle.flac'"
I also tried sending the path I got from the AWS Console (Copy path button), "s3://s3-eu-west-1.amazonaws.com/ch.ebu.mcma.google.eu-west-1.ibc.website/ExtractAudioJobResults/audioGoogle.flac", but that did not work either.
You seem to be trying to copy data from S3 to Google Cloud Storage directly. This is not what your example/tutorial shows. The sample code assumes that you upload a local copy of the data to Google Cloud Storage. S3 is not local storage.
How you could do it:
Download the data to /tmp in your Lambda function
Use the sample code above to upload the data from /tmp
(Optionally) Remove the uploaded data from /tmp
A word of caution: the storage available under /tmp is currently limited to 512 MB. If you want to upload/copy files larger than that, this approach won't work. Also beware that the Lambda execution environment may be re-used, so cleaning up after yourself (i.e. step 3) is a good idea if you plan to copy lots of files.
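For illustration, a minimal Node.js sketch of those three steps could look like this, assuming the aws-sdk and @google-cloud/storage clients are available in the Lambda package and using placeholder bucket and key names:
const AWS = require('aws-sdk');
const fs = require('fs');
const path = require('path');
const { Storage } = require('@google-cloud/storage');

const s3 = new AWS.S3();
const storage = new Storage();

exports.handler = async (event) => {
    const s3Bucket = 'source-s3-bucket';        // placeholder source bucket
    const s3Key = 'folder/filename.extension';  // placeholder object key
    const localPath = path.join('/tmp', path.basename(s3Key));

    // 1. Download the object from S3 into /tmp
    const obj = await s3.getObject({ Bucket: s3Bucket, Key: s3Key }).promise();
    fs.writeFileSync(localPath, obj.Body);

    // 2. Upload the local copy to Google Cloud Storage
    await storage.bucket('target-gcs-bucket').upload(localPath, {
        gzip: true,
        metadata: { cacheControl: 'public, max-age=31536000' },
    });

    // 3. Clean up /tmp, since the execution environment may be re-used
    fs.unlinkSync(localPath);
};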

AWS Lambda@Edge + CloudFront error

I'm using Lambda@Edge + CloudFront to do some image resizing etc. My origin is an S3 bucket.
ISSUE: When I try to request an object in S3 via CloudFront from the browser, I get the above error (picture). It even happens when I use just a test function (below).
How I call/query it: my S3 bucket is set as the origin, so I just use my CloudFront domain name d5hbjkkm17mxgop.cloudfront.net and add the S3 path /my_folder/myimage.jpg.
Browser URL used: d5hbjkkm17mxgop.cloudfront.net/my_folder/myimage.jpg
exports.handler = (event, context, callback) => {
    var request = event.Records[0].cf.request;
    console.log(event);
    console.log("\n\n\n");
    console.log(request);
    callback(null, request);
};
I'm pretty sure that request is an object - I have no idea why this is happening.
If I test in the AWS console everything works, so it has to be a CloudFront/Lambda interface error - the Lambda is not even invoked (no new log entries are generated).
I also see an error in the CloudFront access log:
2018-01-08 12:40:20 CDG50 855 62.65.189.38 GET d3h4fd56s4fs65d4f6somxgyh.cloudfront.net /nv1_andrej_fake_space/98f741e0b87877c607a6ad0d2b8af7f3ba2f949d7788b07a9e89453043369196 502 - Mozilla/5.0%2520(X11;%2520Ubuntu;%2520Linux%2520x86_64;%2520rv:57.0)%2520Gecko/20100101%2520Firefox/57.0 - - LambdaValidationError usnOquwt7A0R7JkFD3H6biZp21dqnWwC5szU6tHxKxcHv5ZAU_g6cg== d3hb8km1omxgyh.cloudfront.net https 260 0.346 - TLSv1.2 ECDHE-RSA-AES128-GCM-SHA256 LambdaValidationError HTTP/2.0
Any ideas?
EDITED: semicolon
Do not forget to publish a new version of your Lambda. It is not sufficient to save it. The version that was last published is the one actually deployed, so you may be looking at different code in the AWS console window.
EDIT: another gotcha - do not forget to change the function version in your CloudFront settings. Select the CloudFront distribution that is bound to your Lambda, go to Behaviors, choose Edit, and scroll down; the last entry is Lambda Function Associations.
The last number in the Lambda Function ARN is the version number of the deployed Lambda.
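As an illustration of the publish step, a sketch like this (the function name is a placeholder) publishes a new version via the Node.js SDK and prints the versioned ARN that the CloudFront behavior needs:
const AWS = require('aws-sdk');

// Lambda@Edge functions must live in us-east-1
const lambda = new AWS.Lambda({ region: 'us-east-1' });

lambda.publishVersion({ FunctionName: 'my-edge-function' }, (err, data) => {
    if (err) return console.error(err);
    // data.FunctionArn ends with ':<version>' - this versioned ARN is what you
    // select under CloudFront > Behaviors > Lambda Function Associations
    console.log('Published version', data.Version);
    console.log('Associate this ARN with the behavior:', data.FunctionArn);
});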

How to restore an object from Amazon Glacier to S3 using Node.js code?

I have configured a lifecycle policy in S3; some objects have been moved to the Glacier storage class while others are still in S3. Now I am trying to restore objects from Glacier. I can restore them using Initiate Restore in the console and with the s3cmd command line. How can I write code to restore Glacier objects using the Node.js AWS SDK?
You would use the S3.restoreObject() function in the AWS SDK for NodeJS to restore an object from Glacier, as documented here.
Thanks Mark for the update. I have tried using s3.restoreObject() and the code runs, but I am facing the following issue: { [MalformedXML: The XML you provided was not well-formed or did not validate against our published schema] }
This is the code I tried:
var AWS = require('aws-sdk');
var s3 = new AWS.S3({accessKeyId: 'XXXXXXXX', secretAccessKey: 'XXXXXXXXXX'});
var params = {
    Bucket: 'BUCKET',
    Key: 'file.json',
    RestoreRequest: {
        Days: 1,
        GlacierJobParameters: { Tier: 'Standard' }
    }
};
s3.restoreObject(params, function(err, data) {
    if (err) console.log(err, err.stack);
    else console.log(data);
});
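Once restoreObject is accepted, the restore itself can take hours. A sketch like the following (same placeholder bucket and key) checks progress via headObject, whose Restore field reports whether the temporary copy is ready:
var AWS = require('aws-sdk');
var s3 = new AWS.S3();

s3.headObject({ Bucket: 'BUCKET', Key: 'file.json' }, function(err, data) {
    if (err) return console.log(err, err.stack);
    // data.Restore is e.g. 'ongoing-request="true"' while the restore runs,
    // or 'ongoing-request="false", expiry-date="..."' once the copy is available
    console.log('StorageClass:', data.StorageClass);
    console.log('Restore:', data.Restore);
});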

Amazon S3 and CloudFront - publish an uploaded file under a hashed filename

Technologies:
Python3
Boto3
AWS
I have a project built using Python3 and Boto3 to communicate with a bucket in Amazon S3 service.
The process is that a user posts images to the service; these images are uploaded to an S3 bucket and can be served through Amazon CloudFront using a hashed file name instead of the real file name.
Example:
(S3) Upload key: /category-folder/png/image.png
(CloudFront) Serve: http://d2949o5mkkp72v.cloudfront.net/d824USNsdkmx824
I want the file uploaded to S3 to appear under a hashed file name when served from CloudFront.
Does anyone know a way to make S3 or CloudFront automatically convert and publish a file name as a hashed name?
To meet my needs, I created the fields required to maintain the keys (and keep them unique, both in S3 and in my MongoDB):
Fields:
original_file_name = my_file_name
file_category = my_images, children, fun
file_type = image, video, application
key = uniqueID
With these fields in place, one can check whether a key exists by simply searching for the key, the new file_name, the category, and the type; if a match exists in the database, the file exists.
To generate the unique id:
def get_key(self):
    from uuid import uuid1
    return uuid1().hex[:20]
This limits the ID to 20 characters.

Amazon S3 PUT throws "SignatureDoesNotMatch"

This AWS security stuff is driving me nuts. I'm trying to upload some binary files from a node app using knox. I keep getting the infamous SignatureDoesNotMatch error with my key/secret combination. I traced it down to this: with e.g. Transmit, I can access the bucket by connecting to s3.amazonaws.com, but I cannot access it via the virtual subdomain mybucket.s3.amazonaws.com. (When I try to access the bucket with the s3.amazonaws.com/mybucket syntax, I get an error saying that only the subdomain style is allowed.)
I have tried setting the bucket policy to explicitly allow PUT from the respective user, but that had no effect. Can anyone please shed some light on how I can enable uploading of files from one specific AWS user?
After a lot of trial and error, I narrowed it down to a couple of issues. I'm not entirely sure which one ultimately fixed it, but here are a few things you might want to try:
Make sure you are setting the right region (datacenter). In my case, that looked like this:
knox.createClient({
    key: this.config.key,
    secret: this.config.secret,
    bucket: this.config.bucket,
    region: 'us-west-2' // because my bucket is supposed to be in Oregon
});
Check your PUT headers. In my case, Content-Type was accidentally set to undefined, which caused issues:
var headers = {
    'x-amz-acl': 'public-read' // if you want anyone to be able to download the file
};
if (filesize) headers['Content-Length'] = filesize;
if (mime) headers['Content-Type'] = mime;
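Putting both points together, a sketch along these lines (paths, bucket and credentials are placeholders) uploads a file with knox once the region and headers are set explicitly:
var knox = require('knox');

var client = knox.createClient({
    key: process.env.AWS_ACCESS_KEY_ID,
    secret: process.env.AWS_SECRET_ACCESS_KEY,
    bucket: 'mybucket',
    region: 'us-west-2'
});

var headers = {
    'x-amz-acl': 'public-read',
    'Content-Type': 'image/jpeg' // set explicitly so it is never undefined
};

client.putFile('./local/image.jpg', '/uploads/image.jpg', headers, function(err, res) {
    if (err) return console.error(err);
    console.log('Upload finished with HTTP status', res.statusCode);
    res.resume(); // drain the response
});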
