Cloud Function storage trigger on folder of a particular Bucket - node.js

I have a scenario where a Cloud Function should execute when something changes in a particular folder of a bucket. When I deploy the function using the CLI and pass BUCKET/FOLDERNAME as the trigger, I get an "invalid arguments" error. Is there any way to set a trigger at the folder level?

You can only specify a bucket name. You cannot specify a folder within the bucket.
A key point to note is that the namespace for buckets is flat. Folders are emulated; they don't actually exist. All objects in a bucket have the bucket as the parent, not a directory.
What you can do instead is implement an if condition inside your function that only does the work when the object's name starts with your folder prefix. Keep in mind that with this approach your function will still be triggered for every object uploaded to your bucket.
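For illustration, here is a minimal sketch of that check, written in TypeScript for a Node.js background function; the function name and the "uploads/" prefix are placeholders:

// index.ts - minimal sketch of prefix filtering in a storage-triggered background function.
interface StorageObjectData {
  bucket: string;
  name: string; // full object path, e.g. "uploads/report.csv"
}

export const onObjectFinalize = (data: StorageObjectData, context: unknown): void => {
  // A "folder" is just a prefix of the object name, so filter on the prefix here.
  if (!data.name.startsWith("uploads/")) {
    return; // object is outside the folder we care about; do nothing
  }
  console.log(`Processing gs://${data.bucket}/${data.name}`);
  // ...actual processing goes here...
};

The function is still deployed with the bucket itself (not the folder) as the trigger resource.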

Related

How can I add a file to my AWS SAM Lambda function runtime?

While working with AWS, I need to load a WSDL file in order to set up a SOAP service. The problem I now encounter, however, is that I don't know how I can add a file to the Docker container running my Lambda function so that I can just read the file inside my Lambda, as in the code snippet below.
const { readdirSync } = require('fs');

// List the WSDL files that were packaged alongside the handler.
const files = readdirSync(__dirname + pathToWsdl);
files.forEach(file => {
  log.info(file);
});
Any suggestions on how I can do this are greatly appreciated!
Here are a few options:
If the files are static and small, you can bundle them in the Lambda package.
If the files are static or change infrequently, then you can store them in S3 and pull from there on Lambda cold start (see the sketch after this list).
If the files need to be accessed and modified by multiple Lambda functions concurrently or if you have a large volume of data with low-latency access requirements, then use EFS.
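A minimal sketch of the second option (pulling the file from S3 on a cold start and caching it across warm invocations), written in TypeScript and assuming a recent AWS SDK for JavaScript v3; the bucket and key names are placeholders:

import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({});
let cachedWsdl: string | undefined; // module scope, so it survives warm invocations

export const handler = async (): Promise<void> => {
  if (cachedWsdl === undefined) {
    // Cold start: fetch the WSDL once and keep it in memory.
    const response = await s3.send(
      new GetObjectCommand({ Bucket: "my-config-bucket", Key: "service.wsdl" })
    );
    cachedWsdl = await response.Body?.transformToString();
  }
  // ...use cachedWsdl to set up the SOAP client...
};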
EFS is overkill for a small, static file. I would just package the file with the Lambda function.

Long polling AWS S3 to check if item exists?

The context here is simple: there's a Lambda (lambda1) that creates a file asynchronously and then uploads it to S3.
Then another Lambda (lambda2) receives the soon-to-exist file name and needs to keep checking S3 until the file exists.
I don't think S3 triggers will work because lambda2 is invoked by a client request.
1) Do I get charged for this kind of request between Lambda and S3? I will be polling until the object exists.
2) What other way could I achieve this that doesn't incur charges?
3) What method do I use to check if a file exists in S3? (Just try to get it and check the status code?)
It looks like you should be using an S3 ObjectCreated trigger on the Lambda. That way, whenever an object gets created, it will trigger your Lambda function automatically with the file metadata.
See the AWS documentation for information on configuring an S3 event trigger.
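For illustration, a minimal sketch of a handler wired to an s3:ObjectCreated:* notification (TypeScript; the processing step is a placeholder):

// The event follows the standard S3 notification format.
interface S3Record { s3: { bucket: { name: string }; object: { key: string } } }

export const handler = async (event: { Records: S3Record[] }): Promise<void> => {
  for (const record of event.Records) {
    const bucket = record.s3.bucket.name;
    // Object keys in the event are URL-encoded, with spaces encoded as "+".
    const key = decodeURIComponent(record.s3.object.key.replace(/\+/g, " "));
    console.log(`Object created: s3://${bucket}/${key}`);
    // ...process the newly created file here...
  }
};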
Let me make sure I understand correctly.
Client calls Lambda1. Lambda1 creates a file asynchronously and uploads it to S3.
The call to Lambda1 returns as soon as Lambda1 has started its async processing.
Client calls Lambda2 to pull from S3 the file that Lambda1 is going to push there.
Why not just wait for Lambda1 to create the file and return it to the client? Otherwise this is going to be an expensive file exchange.

Amazonka: How to generate S3:// uri from Network.AWS.S3.Types.Object?

I've been using turtle to call "aws s3 ls" and I'm trying to figure out how to replace that with amazonka.
Absolute s3:// URIs were central to how my program worked. I now know how to get objects and filter them, but I don't know how to convert an Object into an s3:// URI to integrate with my existing program.
I came across the getFile function and tried downloading a file from S3.
Perhaps I had something wrong, but it didn't seem like just the S3 bucket and object key were enough to download a given file. If I'm wrong about that, I need to double-check my configuration.

IFileProvider Azure File storage

I am thinking about implementing the IFileProvider interface with Azure File Storage.
What I'm trying to find in the docs is whether there is a way to send the whole path to the file to the Azure API, like rootDirectory/sub1/sub2/example.file, or whether that should be mapped to some recursive function that takes the path and traverses the directory structure on the file storage.
I just want to make sure I'm not missing something and reinventing the wheel for something that already exists.
[UPDATE]
I'm using the Azure Storage Client for .NET. I would rather not mount anything.
My intention is to have several IFileProviders which I could switch between based on the environment and other conditions.
So, for example, if my environment is Cloud, I would use the IFileProvider implementation that uses Azure File Services through the Azure Storage Client. If my environment is MyServer, I would use the server's local file system. A third option would be the environment someOther with its own implementation.
Now, for all of them, IFileProvider operates with a path like root/sub1/sub2/sub3. For Azure File Storage, is there a way to send the whole path at once to get sub3 info/content, or should the path be broken into individual directories, getting a reference/content for each step?
I hope that clears the question.
To access a specific subdirectory nested under multiple subdirectories, you can use the GetDirectoryReference method to construct the CloudFileDirectory, as follows:
var fileShare = storageAccount.CreateCloudFileClient().GetShareReference("myshare");
var rootDir = fileShare.GetRootDirectoryReference();
// Pass the full relative path in one call; there is no need to walk each level.
var dir = rootDir.GetDirectoryReference("2017-10-24/15/52");
var items = dir.ListFilesAndDirectories();
To access a specific file under the subdirectory, you can use the GetFileReference method to get a CloudFile instance, as follows:
var file = rootDir.GetFileReference("2017-10-24/15/52/2017-10-13-2.png");

Setting Metadata in Google Cloud Storage (Export from BigQuery)

I am trying to update the metadata (programmatically, from Python) of several CSV/JSON files that are exported from BigQuery. The application that exports the data is the same as the one modifying the files (thus using the same server certificate). The export goes well, until I try to use the objects.patch() method to set the metadata I want. The problem is that I keep getting the following error:
apiclient.errors.HttpError: <HttpError 403 when requesting https://www.googleapis.com/storage/v1/b/<bucket>/<file>?alt=json returned "Forbidden">
Obviously, this has something to do with bucket or file permissions, but I can't manage to get around it. How come, if the same certificate is used both for writing the files and for updating their metadata, I'm unable to update them? The bucket was created with the same certificate.
If that's the exact URL you're using, it's a URL problem: you're missing the /o/ between the bucket name and the object name. The objects.patch request should go to https://www.googleapis.com/storage/v1/b/<bucket>/o/<file>?alt=json.
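For illustration, a sketch of the corrected objects.patch call, written in TypeScript with the built-in fetch of Node 18+; the bucket and object names are placeholders and an OAuth2 access token is assumed to be available in an environment variable:

async function patchObjectMetadata(): Promise<void> {
  const bucket = "my-bucket"; // placeholder
  const object = encodeURIComponent("exports/data.json"); // placeholder; object names must be URL-encoded
  // Note the /o/ segment between the bucket name and the object name.
  const url = `https://www.googleapis.com/storage/v1/b/${bucket}/o/${object}`;

  const response = await fetch(url, {
    method: "PATCH",
    headers: {
      Authorization: `Bearer ${process.env.GCS_TOKEN}`, // assumed token source
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ metadata: { exportedBy: "bigquery-export" } }),
  });
  console.log(response.status); // 200 on success, 403 if the caller lacks permission
}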
