I am processing a video file with ffmpeg in AWS Lambda. The file is 1 GB, and I don't want to store the processed output in the /tmp folder (or on EFS).
Is there a way to deliver the output file directly to the user's desktop from AWS Lambda?
Related
I'm going to upload the file through Node.js. There are many guides on the Internet for uploading image files, but I can't find any for video or voice files. Is uploading video or voice files the same as uploading image files? Do I just put the voice or video file in the same place where the image file goes?
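For what it's worth, at the HTTP level a video or voice upload works exactly like an image upload: the file goes in the same multipart/form-data field, and only the filename and MIME type change. A minimal sketch in Python with requests (the endpoint URL and the "file" field name are hypothetical placeholders for whatever your server expects):

import requests

def upload_media(path, mime_type):
    # Video and voice files are posted exactly like images: same
    # multipart/form-data field, only the MIME type differs.
    with open(path, "rb") as f:
        files = {"file": (path, f, mime_type)}
        return requests.post("https://example.com/upload", files=files)

upload_media("photo.jpg", "image/jpeg")
upload_media("clip.mp4", "video/mp4")
upload_media("voice.m4a", "audio/mp4")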
I store M4A audio files on GCS. When I upload the files manually from the GCP Console, the file type on GCS is x-m4a, and you can click on the file and play it (if it's small enough). If I upload the same file using gsutil cp, however, the resulting file type is mp4a-latm, and if you click on it, GCS doesn't offer the audio player widget. The files are the same size. I run gsutil on a Mac (Catalina). I'd like to use gsutil and still play the audio in the Console. Does anyone know why the types differ and how to adjust gsutil's behavior? Many thanks. Example: the same file from my laptop ends up with two different types on GCS.
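A likely cause (an assumption, not confirmed above): gsutil guesses the Content-Type from the file extension, and that guess can differ from what the Console sets. You can override the guess per upload with gsutil -h "Content-Type:audio/x-m4a" cp track.m4a gs://your-bucket/, or set it explicitly in Python with the google-cloud-storage client (bucket and object names below are placeholders):

from google.cloud import storage

# Upload with an explicit Content-Type instead of letting the tool
# guess one from the file extension.
client = storage.Client()
blob = client.bucket("your-bucket").blob("audio/track.m4a")
blob.upload_from_filename("track.m4a", content_type="audio/x-m4a")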
I am using youtube-dl to download transcripts. It works fine on my machine (local server), where I pass __dirname in the options params to save the files. But I want to use Google Cloud Functions, so how can I substitute __dirname with Cloud Storage?
Thank you!!
Uploading directly from youtube-dl is not possible; you can only upload a file to Google Cloud Storage if it is already on disk.
You will need to download the file with whichever program you mention (as noted in the comments, you can download it to a temporary folder), upload the file to GCS, and then delete it from the temporary folder.
What can you actually do? For example, you can run a script inside a Google Cloud instance that uses a gsutil command to upload the files into a bucket.
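A minimal sketch of that download-upload-delete flow in Python, assuming the youtube_dl and google-cloud-storage packages and an existing bucket (BUCKET and video_url are placeholders); in a Cloud Function, /tmp is the only writable path, so the file lands there first:

import os
import youtube_dl
from google.cloud import storage

BUCKET = "your-bucket"  # placeholder

def fetch_and_upload(video_url):
    # Download into /tmp, the only writable directory in a Cloud Function.
    opts = {"outtmpl": "/tmp/%(id)s.%(ext)s"}
    with youtube_dl.YoutubeDL(opts) as ydl:
        info = ydl.extract_info(video_url, download=True)
        local_path = ydl.prepare_filename(info)
    # Upload the downloaded file to GCS, then free the temporary space.
    blob = storage.Client().bucket(BUCKET).blob(os.path.basename(local_path))
    blob.upload_from_filename(local_path)
    os.remove(local_path)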
I am working on an automation piece where I need to download all files from a folder inside an S3 bucket, irrespective of the file names. I understand that using boto3 in Python I can download a single file like this:
s3BucketObj = boto3.client('s3', region_name=awsRegion, aws_access_key_id=s3AccessKey, aws_secret_access_key=s3SecretKey)
s3BucketObj.download_file(bucketName, "abc.json", "/tmp/abc.json")
but I was then trying to download all files, without specifying each filename, in this way:
s3BucketObj.download_file(bucketName, "test/*.json", "/test/")
I know the syntax above is probably totally wrong, but is there a simple way to do this?
I did find a thread that helps here, but it seems a bit complex: Boto3 to download all files from a S3 Bucket
There is no API call to Amazon S3 that can download multiple files.
The easiest way is to use the AWS Command-Line Interface (CLI), which has the aws s3 cp --recursive and aws s3 sync commands (for example, aws s3 sync s3://your-bucket/test ./test). They will do everything for you.
If you choose to program it yourself, then Boto3 to download all files from a S3 Bucket is a good way to do it, because you need to do several things:
Loop through every object (there is no S3 API to copy multiple files)
Create a local directory if it doesn't exist
Download the object to the appropriate local directory
The task is simpler if you do not wish to reproduce the directory structure (e.g. if all objects are in the same path). In that case, you can simply loop through the objects and download each one to the same directory.
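If you do program it yourself, here is a minimal sketch of that loop with boto3 (the bucket name, prefix, and destination directory are placeholders; credentials are resolved from the environment, as in the question):

import os
import boto3

s3 = boto3.client("s3")

def download_prefix(bucket, prefix, dest):
    # Loop through every object under the prefix; S3 has no bulk-download call.
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        for obj in page.get("Contents", []):
            key = obj["Key"]
            if key.endswith("/"):
                continue  # skip zero-byte "folder" placeholder objects
            target = os.path.join(dest, os.path.relpath(key, prefix))
            # Create the local directory if it doesn't exist yet.
            os.makedirs(os.path.dirname(target) or ".", exist_ok=True)
            s3.download_file(bucket, key, target)

download_prefix("your-bucket", "test/", "./test")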
I have an encrypted zip file on AWS S3 with one XML file inside. I am streaming it to my Node.js Heroku app, and I need to unzip it (with a password) and stream the XML file through my SAX parser. I've got everything down pat with my SAX parser. The problem is getting the XML file out of the zip file using the password.
There seem to be plenty of decent libraries for Node that let you unzip files. However, none of them supports unzipping a zip file that is AES-encrypted, or encrypted at all. At least, none that I could find, and I've spent a few hours researching this.
I would prefer to stream the zip file and its contents for the sake of speed. Right now, the only option I can find is to unzip the file via command-line execution from Node. I would prefer not to do that, mainly because I can't find a way to stream the file via the command line.