Retrieve S3 URL of a file uploaded using Paperclip (Rails) from Node.js

I have an image uploaded to an S3 bucket using the paperclip gem from a Rails app.
Is there any way to retrieve the URL of that image from Node.js, since both apps share the same DB?
EDIT
To clarify further, the file names of the images uploaded to S3 are stored in the file_name column of the table. In the Rails app, an instance of the table's model can return the exact URL using the S3 config specified in paperclip.rb.
For example: https://s3-region.amazonaws.com/bucket-name/table/column/000/000/345/thumb/file-name.webp?1655104806
where 345 is the primary key of the record.
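Since only the file name is stored in the shared DB, the Node.js app has to rebuild the URL using the same interpolation pattern the Rails app configures in paperclip.rb. Below is a minimal sketch, assuming Paperclip's default :class/:attachment/:id_partition/:style/:filename pattern implied by the example URL above; the host, bucket, and "table/column" segments are placeholders that must match your actual config, and the trailing query string is assumed to be the attachment's updated_at timestamp:

// Rebuilds Paperclip's default S3 URL from the shared DB columns.
// Host, bucket, and the "table/column" segments are placeholders.
function paperclipUrl(id, fileName, updatedAt, style = 'thumb') {
  // :id_partition zero-pads the id to 9 digits and splits it into
  // groups of three, e.g. 345 -> "000/000/345"
  const partition = String(id).padStart(9, '0').match(/.{3}/g).join('/');
  // Paperclip appends the attachment's updated_at epoch as a cache buster
  const ts = Math.floor(new Date(updatedAt).getTime() / 1000);
  return `https://s3-region.amazonaws.com/bucket-name/table/column/${partition}/${style}/${fileName}?${ts}`;
}

// Reproduces the example URL above
console.log(paperclipUrl(345, 'file-name.webp', '2022-06-13T07:20:06Z'));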

Related

HTML to PDF creation in AWS Lambda using Python

I am trying to create a PDF file containing images and tables from HTML data in AWS Lambda using Python. I searched a lot on Google and didn't find a good solution. I tried some libraries locally (FPDF, pdfkit), but they don't work on AWS. Is there a simple tool to create a PDF and upload it to an S3 bucket? Thanks in advance.
You can use the reportlab PDF Python module. It is good for everything you have asked for: you can add images, create tables, etc. There are a lot of styling options available as well. You can find more about it here: https://www.reportlab.com/docs/reportlab-userguide.pdf
I am using this in production and it works well for my use case, where I have to create an invoice. You can create the invoice in the /tmp directory and then upload it to S3.
The pdfkit library works with AWS Lambda. pdfkit internally needs the wkhtmltopdf binary installed; you can add it as a Lambda layer. You can download the binaries from https://wkhtmltopdf.org/downloads.html.
Once you add the Lambda layer, you can set the config path as follows:
import pdfkit

config = pdfkit.configuration(wkhtmltopdf="/opt/bin/wkhtmltopdf")
pdfkit.from_string(html, "/tmp/output.pdf", configuration=config)  # html is your HTML string; from_file() works similarly
You can upload the file generated in your Lambda /tmp location to an S3 bucket using upload_file(). You can refer to https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/s3.html#S3.Client.upload_file on how to upload to an S3 bucket.

Jimp write image to Google Cloud Storage in Node.js

I'm currently writing images to my filesystem using jimp.
For example: image.write('path')
This cannot be used on Google App Engine because of its read-only filesystem.
How can I write this to a Google Storage bucket? I've tried write streams but keep getting read-only errors, so I feel like Jimp is still writing to the drive.
Thanks heaps
As I understand it, you are modifying some images on your App Engine app and want to upload them to a bucket, but you didn't mention whether you are using the standard or flexible environment. In order to serve the files, you want to make your bucket publicly readable.
Following this Google Cloud Platform Node.js documentation, you can see that to upload a file to a bucket you first need to create a file object:
const blob = bucket.file(yourFileName);
Then, using createWriteStream, you can upload the file:
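For Jimp specifically, you can skip the filesystem entirely by rendering the image to an in-memory buffer and writing that to the bucket. A minimal sketch, assuming the @google-cloud/storage package and the Jimp 0.x API; the bucket name is a placeholder, and save() is used here as a convenience wrapper around createWriteStream:

const { Storage } = require('@google-cloud/storage');
const Jimp = require('jimp');

async function uploadJimpImage(image, fileName) {
  const bucket = new Storage().bucket('your-bucket-name');
  // Render the image to a buffer instead of writing it to disk
  const buffer = await image.getBufferAsync(Jimp.MIME_PNG);
  const blob = bucket.file(fileName);
  // save() wraps createWriteStream and handles the stream events for you
  await blob.save(buffer, { contentType: Jimp.MIME_PNG });
}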

How to delete a file in Firebase Storage using the Firebase Admin SDK

I'm developing an Android app and a web server using Flask and Firebase.
When the client app uploads an image file, the web server saves the image URL in the database.
The client app then gets the image URL and opens the image from Firebase Storage.
So my web server and app don't know the file name.
However, to delete a file in Storage, you need to know its name.
All I can get is the file URL.
How can my web server delete a file using the file URL?
The following code deletes a file using its file name.
I want to change this code to delete a file using the file URL.
bucket = storage.bucket('<BUCKET_NAME>')

def deleteFile(imageName):
    try:
        # the blob name is the full path of the file within the bucket
        bucket.delete_blob('profile_images/' + imageName)
        return True
    except Exception as e:
        print(e)
        return False
The Storage client in the Firebase Admin SDK is a thin wrapper around the Node.js client for Google Cloud Storage. And as far as I know the latter doesn't have a way to map the download URL back to a File.
This means that you'll have to find the path from the client using FirebaseStorage.getReferenceFromUrl(), and then pass that path to your web server. That way the JavaScript code can use the path to create a reference to the File.
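If your server runs Node.js, as this answer assumes, deleting the file once the client has sent you its path looks roughly like the sketch below. This is a minimal sketch, assuming the firebase-admin package and that the path (e.g. profile_images/<name>) comes from the client's getReferenceFromUrl() call; the bucket name is a placeholder:

const admin = require('firebase-admin');
admin.initializeApp({ storageBucket: '<BUCKET_NAME>' });

async function deleteByPath(path) {
  // path is e.g. 'profile_images/my-image.png', sent by the client
  await admin.storage().bucket().file(path).delete();
}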

Is it possible to upload a server file to the cloud using multer?

I want to upload a file that is generated conditionally in a directory on my server, and I am using the multer-s3 package to upload files to S3.
Is it possible to upload that generated file from the server directory to S3 using multer-s3?
No, it is not possible to upload the generated file to S3 using multer-s3, because multer-s3 is designed as an alternative storage engine for multer (which handles incoming multipart uploads), not as a general library for uploading files to S3. You can use another library to upload files to S3, or you can see how multer-s3 does it internally here: https://github.com/badunk/multer-s3/blob/master/index.js#L172
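Instead, you can upload the generated file directly with the AWS SDK, which is essentially what multer-s3 does internally. A minimal sketch, assuming the @aws-sdk/client-s3 package; the region, bucket name, and paths are placeholders:

const { S3Client, PutObjectCommand } = require('@aws-sdk/client-s3');
const fs = require('fs');

async function uploadGeneratedFile(localPath, key) {
  const s3 = new S3Client({ region: 'us-east-1' });
  await s3.send(new PutObjectCommand({
    Bucket: 'your-bucket-name',
    Key: key,
    Body: fs.readFileSync(localPath), // read the generated file from disk
  }));
}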

Amazon S3 and CloudFront - publish an uploaded file under a hashed filename

Technologies:
Python3
Boto3
AWS
I have a project built using Python 3 and Boto3 to communicate with a bucket in the Amazon S3 service.
The process is that a user posts images to the service; these images are uploaded to an S3 bucket and can be served through Amazon CloudFront using a hashed file name instead of the real file name.
Example:
(S3) Upload key: /category-folder/png/image.png
(CloudFront) Serve: http://d2949o5mkkp72v.cloudfront.net/d824USNsdkmx824
I want a file uploaded to S3 to appear under a hashed file name on the CloudFront server.
Does anyone know a way to make S3 or CloudFront automatically convert and publish a file name as a hashed name?
To meet my needs, I created the fields required to maintain the keys (to make them unique, both in S3 and in my MongoDB):
Fields:
original_file_name = my_file_name
file_category = my_images, children, fun
file_type = image, video, application
key = uniqueID
With these fields, one can check whether a key exists by simply searching for the key, the new file name, the category, and the type; if it exists in the database, then the file exists.
To generate the unique ID:
from uuid import uuid1

def get_key(self):
    # uuid1().hex is 32 hex characters; keep the first 20 as the key
    return uuid1().hex[:20]
This limits the ID to 20 characters.
