Renaming/moving a public file on Firebase Cloud Storage using Node.js

The documentation explains how to rename a file in Firebase Cloud Storage using Node.js. However, it turns out that after renaming a public file, it's not public anymore. Is it possible to make it public while moving it?
When uploading a file, it's possible to set the option predefinedAcl. Is there such an option in move()?

The API documentation for move() says that it is not an atomic operation. It's actually a combination of copy() and delete(). Given that implementation detail, and the lack of any alternatives in the API surface, it looks like your only option is to set the ACL on the destination file after you copy it with the SDK.
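For example, a minimal sketch with the @google-cloud/storage Node.js SDK (the bucket and file names here are illustrative):
const { Storage } = require('@google-cloud/storage');
const bucket = new Storage().bucket('my-bucket');

async function movePublicFile(srcName, destName) {
  // move() is copy() + delete(); the public ACL is not carried over by the copy
  const [destFile] = await bucket.file(srcName).move(destName);
  await destFile.makePublic(); // restore public-read access on the destination
}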

Related

How to convert Base64 to String in logic app inline code (Javascript)

Summary: Logic App inline code (which uses Node.js) is missing the Buffer class.
Detailed: I am trying to trigger a Logic App when some content is pushed to SFTP. I want to add some metadata and save the details in Cosmos DB.
The issue is that the name of the file is received as a base64-encoded string in the inline code, and Buffer is not available to decode it.
I even tried to create a Set Variable step (and decode the filename there), but I am unable to pass this variable to the inline code step (not supported).
The final option would be to use an Azure Function instead of inline code, which I am trying to avoid.
I am looking for a workaround for the conversion.
Logic App error image
link to ms doc
Doesn't support require() statements
Doesn't work with variables
Inline code can only perform the simplest JavaScript operations, so we may not be able to use Buffer.
As for passing the base64-encoded string, you can put it in a Compose action first and then pass that into the inline code.
I suggest you first try the built-in base64 expressions in the Azure Logic App, such as base64ToString().
If that does not meet your needs, you can create an Azure Function and call it from the Logic App.
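If you do end up needing to decode inside the inline code step, here is a minimal sketch of a plain-JavaScript base64 decoder that avoids both Buffer and require() (it assumes a standard base64 alphabet and ASCII output):
function base64Decode(input) {
  var alphabet = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/';
  // Convert each base64 character to its 6-bit pattern, then read off 8-bit bytes.
  var bits = '';
  for (var ch of input.replace(/=+$/, '')) {
    bits += alphabet.indexOf(ch).toString(2).padStart(6, '0');
  }
  var out = '';
  for (var i = 0; i + 8 <= bits.length; i += 8) {
    out += String.fromCharCode(parseInt(bits.slice(i, i + 8), 2));
  }
  return out;
}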

Can we read files from a server path using any fs method in Node.js

In my case I need to read file/icon.png from a cloud storage bucket via a token-based URL/path; the token resides in the header of the request.
I tried to use fs.readFile('serverpath'), but it returned the error 'ENOENT', i.e. 'No such file or directory', even though the file exists at that path. So are these fs methods able to make calls and read files from a server, or do they only work with local, static paths? If the latter, how do I read a file from a cloud bucket/server in my case?
I then need to pass that file path to the UI to show the icon.
Use this lib to handle GCS operations.
https://www.npmjs.com/package/@google-cloud/storage
If you do need to use fs, install gcsfuse (https://cloud.google.com/storage/docs/gcs-fuse), mount the bucket to your local filesystem, and then use fs as you normally would.
I would like to complement Cloud Ace's answer by saying that if you have the Storage Object Admin permission, you can make the image public and use its URL like any other public URL.
If you don't want to make the URL public you can get temporary access to the file by creating a signed URL.
Otherwise, you'll have to download the file using the GCS Node.js Client.
I posted this as an answer as it is quite long to be a comment.
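As a minimal sketch of the signed URL option with the @google-cloud/storage client (the bucket and object names are illustrative):
const { Storage } = require('@google-cloud/storage');
const storage = new Storage();

async function getIconUrl() {
  // Grant read access for 15 minutes without making the object public.
  const [url] = await storage
    .bucket('my-bucket')
    .file('file/icon.png')
    .getSignedUrl({ action: 'read', expires: Date.now() + 15 * 60 * 1000 });
  return url; // pass this URL to the UI to render the icon
}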

IFileProvider Azure File storage

I am thinking about implementing the IFileProvider interface with Azure File Storage.
What I am trying to find in the docs is whether there is a way to send the whole path to the file to the Azure API, like rootDirectory/sub1/sub2/example.file, or whether that should actually be mapped to some recursive function that takes the path and traverses the directory structure on the file storage.
I just want to make sure I am not missing something and reinventing the wheel for something that already exists.
[UPDATE]
I'm using the Azure Storage Client for .NET. I would not like to mount anything.
My intention is to have several IFileProviders which I could switch between based on the environment and other conditions.
So, for example, if my environment is Cloud, I would use the IFileProvider implementation that uses Azure File Storage through the Azure Storage Client. Next, if my environment is MyServer, I would use the server's local file system. A third option would be the environment someOther with its own particular implementation.
Now, for all of them, IFileProvider operates with a path like root/sub1/sub2/sub3. For Azure File Storage, is there a way to send the whole path at once to get the sub3 info/content, or should the path be broken into individual directories, getting a reference/content at each step?
I hope that clears up the question.
Now, for all of them, IFileProvider operates with a path like root/sub1/sub2/sub3. For Azure File Storage, is there a way to send the whole path at once to get the sub3 info/content, or should the path be broken into individual directories, getting a reference/content at each step?
To access a specific subdirectory nested across multiple subdirectories, you can pass the full relative path to the GetDirectoryReference method to construct the CloudFileDirectory, as follows:
// Assumes an existing CloudStorageAccount; the share name and path are illustrative.
var fileshare = storageAccount.CreateCloudFileClient().GetShareReference("myshare");
var rootDir = fileshare.GetRootDirectoryReference();
var dir = rootDir.GetDirectoryReference("2017-10-24/15/52"); // full relative path in one call
var items = dir.ListFilesAndDirectories();
To access a specific file under a subdirectory, you can likewise pass the full relative path to the GetFileReference method to get the CloudFile instance:
var file = rootDir.GetFileReference("2017-10-24/15/52/2017-10-13-2.png");

Delete folder in Google Cloud Storage using nodejs gcloud api

I am using the gcloud Node.js API to access Google Cloud Storage. I can save/delete/check the existence of files individually, but I didn't find a way to delete a folder, or even to list the files in a folder, using the gcloud Node.js API.
I have seen people say that the folder hierarchy in GCS is not a real tree structure, but just object names. So I tried to use a wildcard to match the file name string, which did not succeed.
I wonder if there is any way to do it. If not, what tool should I use?
The code to list files in a directory should look something like this:
bucket.getFiles({ prefix: 'directoryName/' }, function(err, files) {})
And to delete:
bucket.deleteFiles({ prefix: 'directoryName/' }, function(err) {})
getFiles API documentation
deleteFiles API documentation
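For reference, the same flow as a minimal promise-based sketch with the current @google-cloud/storage package (the bucket name is illustrative):
const { Storage } = require('@google-cloud/storage');
const bucket = new Storage().bucket('my-bucket');

async function deleteFolder(prefix) {
  // List the "folder" contents, then delete everything under the prefix.
  const [files] = await bucket.getFiles({ prefix });
  console.log(files.map(function (file) { return file.name; }));
  await bucket.deleteFiles({ prefix }); // e.g. prefix = 'directoryName/'
}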
Besides the gcloud Node.js API, there are two other ways to do this.
Use the googleapis package to access the standard JSON API and XML API of GCS. googleapis is a lower-level API tool for interacting with Google services, including Cloud Storage; with it we can create/list/delete files on GCS. Documentation and examples:
https://cloud.google.com/storage/docs/json_api/v1/objects/delete
https://cloud.google.com/storage/docs/json_api/v1/objects/list
Use child_process to execute the gsutil command-line tool. This is not a standard way of programmatically accessing the Google API, but it is still a viable solution. Wildcards are allowed when issuing the command. Note that this may not work on Google App Engine. Here is an example.
Node.js
var exec = require('child_process').exec;
// The bracketed bucket and directory names are placeholders.
exec("gsutil rm gs://[bucketname]/[directory]/*", function (error, stdout, stderr) {});
As Stephen suggested, using the standard gcloud methods bucket.getFiles and bucket.deleteFiles is the most desirable approach. Since GCS doesn't have the concept of directories, manipulating multiple files this way should naturally be treated as a bucket-level operation.

Getting blob URI without connecting to Azure

I've created a file system abstraction where I store files with a relative path, e.g. /uploads/images/img1.jpg.
These can then be saved both on the local file system (relative to a folder) and on Azure. I can then also ask a method to give me the URL to access that relative path.
For Azure, this is currently done similarly to the code below:
public string GetWebPathForRelativePathOnUserContentStorage(string relativeFileFullPath)
{
    var container = getCloudBlobContainer();
    CloudBlockBlob blob = container.GetBlockBlobReference(relativeFileFullPath);
    return blob.Uri.ToString();
}
On a normal website, there might be, say, 40 images on one page, so this gets called around 40 times. First of all, is this slow? I've noticed there is a particular pattern in the generated URL:
https://[storageAccountName].blob.core.windows.net/[container_name]/[relative_path]
Can I safely generate that URL without using the Azure storage API?
On a normal website, there might be, say, 40 images on one page, so this gets called around 40 times. First of all, is this slow?
Not at all. The code you wrote above does not make any calls to storage; it just creates an instance of the CloudBlockBlob class locally. If you were using the GetBlockBlobReferenceFromServer method, it would have been a different story, because that method does make a call to storage.
I've noticed there is a particular pattern in the generated URL:
https://[storageAccountName].blob.core.windows.net/[container_name]/[relative_path]
Can I safely generate that URL without using the Azure storage API?
Absolutely, yes. Assuming you're using just the standard setup, that would be perfectly fine. Non-standard setups would include things like using a custom domain for your blob storage or connecting to the geo-secondary location of your storage account.
