I'm currently writing images to my filesystem using jimp.
For example: image.write('path')
This cannot be used on Google App Engine because of its read-only filesystem.
How can I write the image to a Google Cloud Storage bucket instead? I've tried write streams but keep getting read-only errors, so I suspect jimp is still writing to disk.
Thanks heaps
As I understand it, you are modifying some images on your App Engine instance and want to upload them to a bucket, though you didn't mention whether you are using the standard or flexible environment. If you want the bucket to serve the files, you will also need to make it publicly readable.
Following the Google Cloud Platform Node.js documentation, you can see that to upload a file to a bucket you first need to create a file object:
const blob = bucket.file(yourFileName);
Then you can upload the file using createWriteStream, or pass a Buffer directly to save().
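To tie this back to jimp: instead of image.write(), render the image to an in-memory buffer and pass that to save(), so nothing touches App Engine's read-only disk. A minimal sketch, assuming the @google-cloud/storage client; the bucket and file names are placeholders:
const { Storage } = require('@google-cloud/storage');
const Jimp = require('jimp');

async function uploadImage() {
  const bucket = new Storage().bucket('my-bucket'); // placeholder bucket name

  const image = await Jimp.read('input.png');
  // Render to an in-memory buffer instead of image.write(), which needs a writable disk
  const buffer = await image.getBufferAsync(Jimp.MIME_PNG);

  // save() accepts a Buffer, so no local file is ever created
  await bucket.file('output.png').save(buffer, { metadata: { contentType: Jimp.MIME_PNG } });
}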
I am creating an application in nodejs/typescript that uses Firebase Functions, and I basically need to upload a JSON object to a bucket. I am having issues because the JSON I am creating exists only in memory and is not an actual file, as the application is serverless.
I know Firebase is just a wrapper for Google Cloud Functions and have looked for solutions everywhere, but I cannot seem to get this working. Is anyone able to give me any guidance or suggestions, please?
If I cannot upload the in-memory JSON to a bucket, does anyone know if it's possible to programmatically export a database document as JSON to a storage bucket using Firebase? (I can easily upload the JSON to a database document.)
Below is one example of what I have tried; however, the code is invalid.
await storage()
.bucket()
.file('test.json') // A random string filename and not an existing file
.createWriteStream()
.write(JSON.stringify(SOME_VALID_JSON))
Thanks!
You can use the File object's save() method to write in-memory data to a bucket:
await storage()
.bucket()
.file('test.json')
.save(JSON.stringify(SOME_VALID_JSON))
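If you also want the object stored with the right content type, save() accepts the same options as createWriteStream(). A minimal variant of the same call (option names as documented for @google-cloud/storage; SOME_VALID_JSON as above):
await storage()
  .bucket()
  .file('test.json')
  .save(JSON.stringify(SOME_VALID_JSON), {
    contentType: 'application/json', // serve the object with a JSON MIME type
    resumable: false, // one-shot upload; skips the resumable session for small payloads
  })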
I am trying to create a PDF file containing images and tables from HTML data in AWS Lambda using Python. I searched a lot on Google and didn't find a good solution. I tried some libraries locally (FPDF, pdfkit), but they don't work on AWS. Is there any simple tool to create a PDF and upload it to an S3 bucket? Thanks in advance.
You can use the reportlab Python module. It is good for all the things you have asked for: you can add images, create tables, etc. There are a lot of styling options available as well. You can find more about it here: https://www.reportlab.com/docs/reportlab-userguide.pdf
I am using this in production and it works pretty well for my use case, where I have to create an invoice. You can create the invoice in the /tmp directory and then upload it to S3.
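A minimal sketch of that approach using reportlab's platypus layer; the file names and table data are placeholders:
from reportlab.lib.pagesizes import letter
from reportlab.platypus import SimpleDocTemplate, Table, Image

# /tmp is the only writable path inside a Lambda function
doc = SimpleDocTemplate('/tmp/invoice.pdf', pagesize=letter)
doc.build([
    Image('/tmp/logo.png', width=100, height=50),  # placeholder image
    Table([['Item', 'Price'], ['Widget', '$10']]),  # placeholder table data
])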
The pdfkit library works with AWS Lambda. pdfkit internally needs the wkhtmltopdf binary installed; you can add it as a Lambda layer. You can download the binaries from https://wkhtmltopdf.org/downloads.html.
Once you add the Lambda layer, you can set the binary path as follows:
import pdfkit

config = pdfkit.configuration(wkhtmltopdf="/opt/bin/wkhtmltopdf")  # layers are mounted under /opt
pdfkit.from_string(html, "/tmp/output.pdf", configuration=config)  # html is your input string; or use pdfkit.from_file(...)
You can upload the file generated in your Lambda's temp location to an S3 bucket using upload_file(). See https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/s3.html#S3.Client.upload_file for how to upload to an S3 bucket.
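For example, a minimal upload sketch with boto3; the bucket name and key are placeholders:
import boto3

s3 = boto3.client('s3')
# Upload the PDF generated in Lambda's writable /tmp directory
s3.upload_file('/tmp/output.pdf', 'my-bucket', 'pdfs/output.pdf')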
Can I use Lambda to compress images under a bucket?
I can get the images under a particular bucket via listObjects. How do I compress the returned images and write them to another bucket?
Yes, you can absolutely use Lambda. Try this library: aws-lambda-image-compressor
AWS lambda function to compress and resize images
This is a Lambda function that resizes/reduces images automatically. When an image is put into an S3 bucket, the function resizes/reduces it and saves it into a new bucket. I have used it in the past and I loved it.
Usage
Edit the lambda-config.js file and assign the name, description, memory size, and timeout of your Lambda function.
Edit the .env file with your AWS access data.
npm install
gulp deploy
You can also try this other, more popular library: aws-lambda-image
If you really want to create something of your own and want a good start, I would recommend these two articles that explain it very well (a rough sketch of the approach follows the links):
Image conversion using Amazon Lambda and S3 in Node.js
Automating Image Compression Using S3 & Lambda
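As a rough outline of what such a function looks like - not the articles' exact code - here is a minimal sketch of an S3-triggered handler, assuming the sharp library is bundled with the function; the destination bucket name is a placeholder:
const AWS = require('aws-sdk');
const sharp = require('sharp');
const s3 = new AWS.S3();

exports.handler = async (event) => {
  const record = event.Records[0].s3;
  const key = decodeURIComponent(record.object.key.replace(/\+/g, ' '));

  // Read the original image from the source bucket
  const { Body } = await s3.getObject({ Bucket: record.bucket.name, Key: key }).promise();

  // Re-encode at lower quality to shrink the file
  const compressed = await sharp(Body).jpeg({ quality: 70 }).toBuffer();

  // Write to a different bucket so the upload doesn't re-trigger this function
  await s3.putObject({
    Bucket: 'my-compressed-bucket', // placeholder destination bucket
    Key: key,
    Body: compressed,
    ContentType: 'image/jpeg',
  }).promise();
};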
If you are fine with using Amazon API Gateway, then you can follow this AWS Compute Blog post:
Resize Images on the Fly with Amazon S3, AWS Lambda, and Amazon API Gateway
Hope this was useful.
I want to upload a file that is generated conditionally in a directory on my server, and I am using the multer-s3 package to upload files to S3.
Is it possible to upload that generated file from the server directory to S3 using multer-s3?
No, it is not possible to upload the generated file to S3 using multer-s3, because the multer-s3 library is designed as an alternative storage engine for multer, not as a general-purpose library for uploading files to S3. You can use another library (for example, the AWS SDK) to upload files to S3, or you can read how multer-s3 does it internally here: https://github.com/badunk/multer-s3/blob/master/index.js#L172
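For instance, a minimal sketch using the AWS SDK's upload(); the bucket name, key, and file path are placeholders:
const fs = require('fs');
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

// Stream the generated file from disk straight into S3
s3.upload({
  Bucket: 'my-bucket',
  Key: 'generated/report.pdf',
  Body: fs.createReadStream('/path/to/generated/report.pdf'),
}, (err, data) => {
  if (err) throw err;
  console.log('Uploaded to', data.Location);
});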
I know that it's possible to upload files to my Cloud Files account in Node.js using the following module: node-cloudfiles.
But is it also possible to upload a file stream directly?
In my case I am downloading an image from a certain location in Node.js and want to upload it directly to my Cloud Files account without saving the image temporarily on my server.
Of course it is possible - you can just read the Rackspace Cloud Files API documentation (http://docs.rackspacecloud.com/files/api/cf-devguide-latest.pdf) and implement the necessary parts yourself.
However, I'd suggest waiting until https://github.com/nodejitsu/node-cloudfiles/pull/11 gets merged into the trunk - then the node-cloudfiles library will support uploading files via streams, so you won't have to create files before uploading.
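If you do implement it yourself in the meantime, the core of it is piping the download response straight into a PUT request against your Cloud Files storage URL. A minimal sketch; the storage URL, auth token, and source image URL are all placeholders (the real values come from the authentication call):
const https = require('https');

const storageUrl = 'https://storage101.example.com/v1/MossoCloudFS_xxx/container/image.jpg'; // placeholder
const authToken = 'YOUR_AUTH_TOKEN'; // placeholder, returned by the auth endpoint

https.get('https://example.com/image.jpg', (download) => {
  const upload = https.request(storageUrl, {
    method: 'PUT',
    headers: {
      'X-Auth-Token': authToken,
      'Content-Type': download.headers['content-type'],
    },
  }, (res) => console.log('Upload status:', res.statusCode));

  // No temp file: the response stream is piped directly into the request
  download.pipe(upload);
});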