How to restore an object from Amazon Glacier to S3 using Node.js?

I have configured a lifecycle policy in S3, so some of my objects are stored in the Glacier storage class while others are still in S3. Now I am trying to restore objects from Glacier. I can restore them using Initiate Restore in the console and with the s3cmd command line, but how can I write code to restore objects from Glacier using the Node.js AWS SDK?

You would use the S3.restoreObject() function in the AWS SDK for Node.js to restore an object from Glacier, as documented here.

Thanks Mark for the update. I have tried s3.restoreObject() and the code runs, but I am facing the following issue: { MalformedXML: The XML you provided was not well-formed or did not validate against our published schema }
This is the code I tried:
var AWS = require('aws-sdk');

var s3 = new AWS.S3({accessKeyId: 'XXXXXXXX', secretAccessKey: 'XXXXXXXXXX'});

var params = {
  Bucket: 'BUCKET',
  Key: 'file.json',
  RestoreRequest: {
    Days: 1,
    GlacierJobParameters: {
      Tier: 'Standard'
    }
  }
};

s3.restoreObject(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data);               // successful response
});
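One possible cause, offered as an assumption rather than a confirmed diagnosis: older releases of the aws-sdk package did not support GlacierJobParameters inside RestoreRequest, and a request containing it could serialize into XML that fails AWS's schema validation, producing exactly this MalformedXML error. Upgrading the SDK, or first trying a minimal request that omits the tier, helps isolate the problem:

var AWS = require('aws-sdk');
var s3 = new AWS.S3({region: 'us-east-1'}); // assumption: use your bucket's region

// Minimal restore request: Days only, default retrieval tier.
// If this succeeds where the full request fails, the installed
// aws-sdk version likely predates GlacierJobParameters support.
var params = {
  Bucket: 'BUCKET',
  Key: 'file.json',
  RestoreRequest: {
    Days: 1
  }
};

s3.restoreObject(params, function(err, data) {
  if (err) console.log(err, err.stack);
  else console.log(data);
});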

Related

Firebase upload raw data to bucket

I am creating an application in nodejs/typescript that uses Firebase Functions, and I basically need to upload a JSON object to a bucket. I am having issues because the JSON I am creating exists in memory and is not an actual file, as the application is a serverless one.
I know Firebase is just a wrapper for Google Cloud Functions and have looked for solutions everywhere, but I cannot seem to get this working. Is anyone able to give me any guidance or suggestions, please?
If I cannot upload the in-memory data to a bucket, does anyone know if it's possible to programmatically export a database document as JSON to a storage bucket using Firebase? (I can easily upload the JSON to a database document.)
Below is one example of what I have tried. However, the code is obviously invalid.
await storage()
  .bucket()
  .file('test.json') // A random string filename and not an existing file
  .createWriteStream()
  .write(JSON.stringify(SOME_VALID_JSON))
Thanks!
You can use save() to write in-memory data to a bucket.
await storage()
  .bucket()
  .file('test.json')
  .save(JSON.stringify(SOME_VALID_JSON))
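For completeness, save() also accepts an options object, so you can set a content type for the JSON. A minimal sketch, assuming the firebase-admin SDK and a project with a default bucket (the data object is hypothetical):

const admin = require('firebase-admin');
admin.initializeApp();

const data = { hello: 'world' }; // hypothetical in-memory JSON

await admin.storage()
  .bucket() // default bucket configured for the project
  .file('test.json')
  .save(JSON.stringify(data), {
    contentType: 'application/json',
    resumable: false // small payloads upload faster without a resumable session
  });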

How to get the file path in AWS Lambda?

I would like to send a file to Google Cloud Platform using their client library, as in this example (Node.js code sample): https://cloud.google.com/storage/docs/uploading-objects
My current code looks like this:
const s3Bucket = 'bucket_name';
const s3Key = 'folder/filename.extension';
const filePath = s3Bucket + "/" + s3Key;

await storage.bucket(s3Bucket).upload(filePath, {
  gzip: true,
  metadata: {
    cacheControl: 'public, max-age=31536000',
  },
});
But when I do this there is an error:
"ENOENT: no such file or directory, stat
'ch.ebu.mcma.google.eu-west-1.ibc.websiteExtract/AudioJobResults/audioGoogle.flac'"
I also tried to send the path I got in AWS Console (Copy path button) "s3://s3-eu-west-1.amazonaws.com/ch.ebu.mcma.google.eu-west-1.ibc.website/ExtractAudioJobResults/audioGoogle.flac", but did not work.
You seem to be trying to copy data from S3 to Google Cloud Storage directly. This is not what your example/tutorial shows. The sample code assumes that you upload a local copy of the data to Google Cloud Storage. S3 is not local storage.
How you could do it:
Download the data to /tmp in your Lambda function
Use the sample code above to upload the data from /tmp
(Optionally) Remove the uploaded data from /tmp
A word of caution: the storage available under /tmp is currently limited to 512 MB. If you want to upload/copy files larger than that, this won't work. Also beware that the Lambda execution environment may be re-used, so cleaning up after yourself (i.e. step 3) is probably a good idea if you plan to copy lots of files.
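Here is a minimal sketch of those three steps, assuming the aws-sdk v2 and @google-cloud/storage packages and placeholder bucket/key names:

const AWS = require('aws-sdk');
const { Storage } = require('@google-cloud/storage');
const fs = require('fs');
const path = require('path');

const s3 = new AWS.S3();
const storage = new Storage();

exports.handler = async (event) => {
  const s3Bucket = 'bucket_name';            // placeholder
  const s3Key = 'folder/filename.extension'; // placeholder
  const localPath = path.join('/tmp', path.basename(s3Key));

  // 1. Download the object from S3 to /tmp
  const obj = await s3.getObject({ Bucket: s3Bucket, Key: s3Key }).promise();
  fs.writeFileSync(localPath, obj.Body);

  // 2. Upload the local copy to Google Cloud Storage
  await storage.bucket('my-gcs-bucket').upload(localPath, { // bucket name is an assumption
    gzip: true,
    metadata: { cacheControl: 'public, max-age=31536000' },
  });

  // 3. Clean up /tmp in case the execution environment is re-used
  fs.unlinkSync(localPath);
};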

How to store files in firebase using node.js

I have a small assignment where I will have a URL to a document or a file, like a Google Drive link or a Dropbox link.
I have to use this link to store that file or doc in Firebase using Node.js. How should I start?
A little heads-up might help. What should I use? Please help, I'm stuck here.
The documentation for using the admin SDK is mostly covered in GCP documentation.
Here's a snippet of code that shows how you could upload an image directly to Cloud Storage if you have a URL for it. Any public link works, whether it's shared from Dropbox or somewhere else on the internet.
Edit 2020-06-01: The option to upload directly from a URL was dropped in v2.0 of the SDK (4 September 2018): https://github.com/googleapis/nodejs-storage/releases/tag/v2.0.0
const fileUrl = 'https://www.dropbox.com/some/file/download/link.jpg';

const opts = {
  destination: 'path/to/file.jpg',
  metadata: {
    contentType: 'image/jpeg'
  }
};

firebase.storage().bucket().upload(fileUrl, opts);
This example uses the default bucket of your application, and the opts object provides file upload options for the API call.
destination is the path that your file will be uploaded to in Google Cloud Storage
metadata should describe the file that you're uploading (see more examples here)
contentType is the file MIME type that you are uploading
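With upload-from-URL gone in v2.0+, one workable alternative is to stream the download straight into the bucket. A sketch, assuming Node's built-in https module, the firebase-admin SDK, and a publicly downloadable URL:

const https = require('https');
const admin = require('firebase-admin');
admin.initializeApp();

const fileUrl = 'https://www.dropbox.com/some/file/download/link.jpg';
const file = admin.storage().bucket().file('path/to/file.jpg');

// Stream the HTTP response directly into Cloud Storage so the
// file never has to fit in memory or on disk.
https.get(fileUrl, (res) => {
  res.pipe(file.createWriteStream({ metadata: { contentType: 'image/jpeg' } }))
    .on('finish', () => console.log('upload complete'))
    .on('error', (err) => console.error(err));
});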

PermanentRedirect while generating pre signed url

I am having an issue while creating a pre-signed URL for AWS S3 using aws-sdk in Node.js. It gives me: PermanentRedirect: The bucket you are attempting to access must be addressed using the specified endpoint.
const AWS = require('aws-sdk');

const s3 = new AWS.S3();
AWS.config.update({accessKeyId: 'test123', secretAccessKey: 'test123'});
AWS.config.update({region: 'us-east-1'});

const myBucket = 'test-bucket';
const myKey = 'test.jpg';
const signedUrlExpireSeconds = 60 * 60;

const url = s3.getSignedUrl('getObject', {
  Bucket: myBucket,
  Key: myKey,
  Expires: signedUrlExpireSeconds
});

console.log(url);
How can I remove this error and get the pre-signed URL working? Also, I need to know what the purpose of Key is.
1st - What is the region of your bucket? S3 is a global service, yet each bucket has a region that you must select when creating it.
2nd - When working with S3 outside the N. Virginia (us-east-1) region, there can be situations where AWS's internal SSL/DNS is not yet in sync. I have hit this issue multiple times and can't find exact docs on it, but the symptoms are redirects, not-found errors, or access errors that then resolve on their own after 4-12 hours. What I have been able to dig up suggests it is related to internal AWS SSL/DNS handling for buckets outside us-east-1.
3rd - If you re-created the bucket multiple times re-using the same name: bucket names are global even though buckets are regional. So this could again be scenario 2, where within the last 24 hours the bucket actually lived in a different region and AWS's internal DNS/SSL hasn't synced yet.
P.S. Key is the object's key; every object inside a bucket has one. In the AWS console a key looks like a path to a file, but it is not a path: S3 has no concept of directories the way hard drives do. The console just splits each object's key on / and displays the segments as directories for a better browsing UX.
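As a first debugging step, it can help to construct the S3 client with the bucket's actual region before signing. A minimal sketch; the region and names are assumptions:

const AWS = require('aws-sdk');

// Pass the region (and signature version) to the client constructor
// instead of relying on global config that is set after construction.
const s3 = new AWS.S3({
  region: 'eu-west-1',   // assumption: replace with your bucket's region
  signatureVersion: 'v4'
});

const url = s3.getSignedUrl('getObject', {
  Bucket: 'test-bucket',
  Key: 'test.jpg',
  Expires: 60 * 60
});

console.log(url);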

Node AWS SDK: Updating Credentials After Initialization

According to the Node AWS SDK documentation, new service objects take the AWS object's configuration when they are initialized, and updating the AWS object's configuration afterwards will not change an already instantiated object's config; it must be updated manually. The docs specifically say you can do this, but updating the instantiated object manually doesn't seem to work.
var AWS = require('aws-sdk'),
    awsInstance;

AWS.config.update({region: 'us-west-2'});
awsInstance = new AWS.S3();
awsInstance.config.update({region: 'us-east-1'});

awsInstance's region is still set to us-west-2. How do you update it after instantiating the object?
You can't change an instance's configuration this way after it has been created. Pass the configuration when you create the instance instead:
awsInstance = new AWS.S3({region: 'us-east-1'});
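A short sketch of this behavior with the aws-sdk v2 package (the regions are placeholders):

var AWS = require('aws-sdk');

// Global config acts as the default for clients created afterwards.
AWS.config.update({region: 'us-west-2'});

var west = new AWS.S3();                      // inherits us-west-2
var east = new AWS.S3({region: 'us-east-1'}); // per-instance override

console.log(west.config.region); // 'us-west-2'
console.log(east.config.region); // 'us-east-1'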
