How to abort uploading a stream to Google Storage in Node.js

Interaction with Cloud Storage is performed using the official Node.js client library.
The output of an external executable (ffmpeg), driven through fluent-ffmpeg, is piped to a writable stream of a Google Cloud Storage object obtained with [createWriteStream](https://googleapis.dev/nodejs/storage/latest/File.html#createWriteStream).
The executable (ffmpeg) can exit with an error, in which case a zero-length file is created on Cloud Storage.
I want to abort the upload on the command error to avoid finalizing an empty storage object.
What is the proper way of aborting the upload stream?
Current code (just an excerpt):
ffmpeg()
  .input(sourceFile.createReadStream())
  .output(destinationFile.createWriteStream())
  .run();
Both files are instances of [File](https://cloud.google.com/nodejs/docs/reference/storage/latest/storage/file).
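Something along these lines is what I have in mind: keep a reference to the write stream and destroy it from the command's error handler, assuming that a stream destroyed before it is ended does not finalize the object. I am not sure this is the proper way, so treat it as a sketch only:
const ffmpeg = require("fluent-ffmpeg");

// sourceFile and destinationFile are the File instances mentioned above.
// Keep a handle on the upload stream so it can be torn down on failure.
const gcsUpload = destinationFile.createWriteStream();

const command = ffmpeg()
  .input(sourceFile.createReadStream())
  .output(gcsUpload);

command.on("error", (err) => {
  // Destroy the upload instead of letting a zero-length object be finalized.
  // Assumption: this handler runs before fluent-ffmpeg ends the output stream.
  gcsUpload.destroy(err);
});

gcsUpload.on("error", (err) => {
  // The destroyed stream re-emits the error here.
  console.error("Upload aborted:", err.message);
});

command.run();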

Related

How to set the output as an Azure blob in an FFMPEG command

In my queue-triggered Azure Function, I am resizing a video using an FFMPEG command.
subprocess.run(
    ["ffmpeg", "-i", "input.mp4",
     "-filter_complex", "scale=1920:1080:force_original_aspect_ratio=decrease,pad=1920:1080:-1:-1,setsar=1,fps=25",
     "-c:v", "libx264",
     "-c:a", "aac",
     "-preset:v", "ultrafast",
     "output.mp4"],
    stdout=subprocess.PIPE,
    stderr=subprocess.STDOUT)
Currently the output is written locally, using the Azure Function's local storage. I want the output file to be written directly to an Azure container as a blob rather than saved locally, because the function has limited space and resizing needs more space than is available, which causes issues.
How can I achieve this?
FFMPEG supports in-memory pipes for both the input and the output. The in-memory data can then easily be stored as a blob in Blob Storage.
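For example, roughly like this (sketched here with Node.js and @azure/storage-blob rather than the question's Python; the connection string, container and blob names are placeholders, and the added -movflags frag_keyframe+empty_moov / -f mp4 flags are an assumption so the MP4 muxer can write to a non-seekable pipe):
const { spawn } = require("child_process");
const { BlobServiceClient } = require("@azure/storage-blob");

async function resizeToBlob() {
  // ffmpeg writes the result to stdout (pipe:1) instead of the local disk.
  const ffmpeg = spawn("ffmpeg", [
    "-i", "input.mp4",
    "-filter_complex", "scale=1920:1080:force_original_aspect_ratio=decrease,pad=1920:1080:-1:-1,setsar=1,fps=25",
    "-c:v", "libx264",
    "-c:a", "aac",
    "-preset:v", "ultrafast",
    "-movflags", "frag_keyframe+empty_moov",
    "-f", "mp4",
    "pipe:1",
  ]);

  const service = BlobServiceClient.fromConnectionString(process.env.AZURE_STORAGE_CONNECTION_STRING);
  const blob = service.getContainerClient("videos").getBlockBlobClient("output.mp4");

  // uploadStream consumes stdout chunk by chunk, so the whole file is never buffered.
  await blob.uploadStream(ffmpeg.stdout, 4 * 1024 * 1024, 5);
}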

Trouble using etherblob for a task

I have been tasked with identifying the first block in which a user (who has been actively uploading compressed archives of data to the Ethereum Rinkeby blockchain) uploaded a blob of data between 2021-01-19 and 2021-01-21.
I put together the following command: $etherblob -network rinkeby 7919090 7936368 --encrypted. Where should I run this command in order to get the correct output?

How do I use a cloud function to unzip a large file in cloud storage?

I have a cloud function which is triggered when a zip is uploaded to Cloud Storage and is supposed to unpack it. However, the function runs out of memory, presumably because the unzipped file is too large (~2.2 GB).
I was wondering what my options are for dealing with this problem. I read that it is possible to stream large files into Cloud Storage, but I don't know how to do this from a Cloud Function or while unzipping. Any help would be appreciated.
Here is the code of the cloud function so far:
from zipfile import ZipFile, is_zipfile
import io
from google.cloud import storage

storage_client = storage.Client()
bucket = storage_client.get_bucket("bucket-name")
destination_blob_filename = "large_file.zip"
blob = bucket.blob(destination_blob_filename)

# The whole archive is downloaded into memory, which is what exhausts the function.
zipbytes = io.BytesIO(blob.download_as_string())

if is_zipfile(zipbytes):
    with ZipFile(zipbytes, 'r') as myzip:
        for contentfilename in myzip.namelist():
            contentfile = myzip.read(contentfilename)
            blob = bucket.blob(contentfilename)
            blob.upload_from_string(contentfile)
Your target process is risky:
If you stream the file without unzipping it completely, you can't validate the checksum of the zip.
If you stream data into GCS, file integrity is not guaranteed.
Thus, you would have two "successful" operations without any checksum validation!
Until Cloud Functions or Cloud Run instances with more memory are available, you can use a Dataflow template to unzip your files.
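If you accept that caveat, the streaming you mention can look roughly like this (sketched in Node.js with the third-party unzipper package rather than your Python code; the bucket and object names are placeholders):
const { Storage } = require("@google-cloud/storage");
const unzipper = require("unzipper");

const bucket = new Storage().bucket("bucket-name");

// Read the zip as a stream and re-upload each entry as its own object,
// so neither the archive nor its contents are ever held fully in memory.
bucket.file("large_file.zip").createReadStream()
  .pipe(unzipper.Parse())
  .on("entry", (entry) => {
    if (entry.type === "File") {
      entry.pipe(bucket.file(entry.path).createWriteStream());
    } else {
      // Directory entries carry no data; discard them.
      entry.autodrain();
    }
  });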

Lambda which reads jpg/vector files from S3 and processes them using graphicsmagick

We have a lambda which reads jpg/vector files from S3 and processes them using graphicsmagick.
This Lambda was working fine until today, but since this morning we have been getting errors while processing vector images using graphicsmagick.
"Error: Command failed: identify: unable to load module /usr/lib64/ImageMagick-6.7.8/modules-Q16/coders/ps.la': file not found # error/module.c/OpenModule/1278.
identify: no decode delegate for this image format/tmp/magick-E-IdkwuE' # error/constitute.c/ReadImage/544."
The above error occurs for certain .eps (vector) files when using the identify function of the gm module.
Could you please share your insights on this?
Please let us know whether any backend changes have recently been made on the AWS side to the ImageMagick module which might have affected this Lambda.

How to upload files larger than 10 MB via Google Cloud HTTP functions? Any alternative options?

I have made a Google Cloud Function that uploads a file into a Google Cloud Storage bucket and returns a signed URL in the response.
Whenever large files (more than 10 MB) are uploaded, it does not work.
It works fine for files smaller than 10 MB.
I have searched the Cloud documentation; it says the maximum request size for HTTP functions is 10 MB and cannot be increased.
resource: {…}
severity: "ERROR"
textPayload: "Function execution could not start, status: 'request too large'"
timestamp: "2019-06-25T06:26:41.731015173Z"
For a successful file upload, it gives the log below:
Function execution took 271 ms, finished with status code: 200
For large files, it gives the log below:
Function execution could not start, status: 'request too large'
Are there any alternative options to upload a file to the bucket using an API? Any different service would be fine. I need to upload files of up to 20 MB. Thanks in advance.
You could upload directly to a Google Cloud Storage bucket using the Firebase SDK for web and mobile clients. Then, you can use a Storage trigger to deal with the file after it's finished uploading.
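For example, a rough sketch of a direct browser upload with the Firebase Web SDK (v9 modular API); the Firebase config and the uploads/ object path are placeholders, and the Storage-triggered function that processes the finished file is not shown:
import { initializeApp } from "firebase/app";
import { getStorage, ref, uploadBytesResumable } from "firebase/storage";

const app = initializeApp({ /* your Firebase config */ });
const storage = getStorage(app);

async function uploadVideo(file) {
  // The upload goes straight to the bucket, bypassing the 10 MB HTTP function limit.
  const objectRef = ref(storage, `uploads/${file.name}`);
  const task = uploadBytesResumable(objectRef, file);

  task.on("state_changed", (snapshot) => {
    console.log(`${snapshot.bytesTransferred} / ${snapshot.totalBytes} bytes`);
  });

  await task;
  // A Storage trigger can now pick the finished object up for further processing.
}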
