List files from s3 greater than some lastModified date - node.js

We are currently using the listObjects method (https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#listObjects-property) of S3 in a Node.js Lambda to get all objects. This returns up to 1000 objects. Is there any way to get only the files whose LastModified date is greater than an input LastModified date from S3 using this method?

This isn't possible via the S3 API.
The best you can do is get creative with your object naming scheme and name things in reverse alphabetical order, starting with something like ZZZZZZZZZZZ, then ZZZZZZZZZZY, and so on.
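Otherwise you have to filter client-side after listing. A minimal Node.js sketch, assuming the AWS SDK for JavaScript v2 from the question and using listObjectsV2 with pagination to get past the 1000-object page limit (the bucket name and cutoff date are placeholders):

const AWS = require('aws-sdk');
const s3 = new AWS.S3();

// Return objects whose LastModified is after the given cutoff date.
// The filtering happens in your code, not inside S3.
async function listObjectsModifiedAfter(bucket, cutoff) {
  const matches = [];
  let ContinuationToken;
  do {
    const page = await s3
      .listObjectsV2({ Bucket: bucket, ContinuationToken })
      .promise();
    matches.push(...(page.Contents || []).filter((o) => o.LastModified > cutoff));
    ContinuationToken = page.NextContinuationToken;
  } while (ContinuationToken);
  return matches;
}

// Usage (placeholders):
// listObjectsModifiedAfter('my-bucket', new Date('2020-01-01')).then(console.log);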

Related

Azure Stream Analytics Output

Azure Stream Analytics has changed the way it stores its output directories. Earlier we used a date path (2022/04/14) as the path pattern, and the output was stored in separate directories in the data lake. Now it stores the output in directories named like '2022%2F4%2F19'. How can this be solved?
When you configure the Azure Stream Analytics output, you can specify the Path pattern, Date format, and Time format. All of these are optional properties you can use to control where the output is written.
Path pattern
You can mention the Path pattern which is used to locate the blob within the specified container.
Do not include a path pattern if you wish to read blobs from the container's root.
You can specify one or more instances of the following three variables within the path: {date}, {time}, or {partition}.
Example: cluster1/logs/{date}/{time}/{partition}
Date format
The date format in which the files are structured if you utilize the date variable in the path.
Example:
YYYY/MM/DD
Time format
The time format in which the files are structured if you use the {time} variable in the path. Currently, only HH (hours) is supported.
References:
Configure the blob storage as a stream
SO Thread for Dynamic Path pattern.
Stream Analytics custom path pattern
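As an illustration only (this is not code Stream Analytics runs), the snippet below shows how the {date} and {time} placeholders in the example pattern resolve into a blob path when Date format is YYYY/MM/DD and Time format is HH; {partition} is filled in by Stream Analytics itself:

// Illustration of how the configured pattern resolves; not Azure code.
function resolvePathPattern(pattern, when) {
  const pad = (n) => String(n).padStart(2, '0');
  const date = `${when.getUTCFullYear()}/${pad(when.getUTCMonth() + 1)}/${pad(when.getUTCDate())}`; // YYYY/MM/DD
  const time = pad(when.getUTCHours()); // HH
  return pattern.replace('{date}', date).replace('{time}', time);
}

console.log(resolvePathPattern('cluster1/logs/{date}/{time}/{partition}', new Date('2022-04-19T10:00:00Z')));
// cluster1/logs/2022/04/19/10/{partition}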

Amazon S3 - How to recursively rename files?

I'm trying to fetch my files via the s3.getObject() method in my node.js backend.
Trouble is, upon uploading the files to my bucket, I failed to replace special characters, dashes, and white-space. So any file whose Key contains those characters (e.g., a Key with the value 10th Anniversary Party (Part 1) 1-23-04) ends up with an endpoint of 10th+Anniversary+Party+(Part+1)+1-23-04.
This becomes troublesome when trying to encode the URI for fetching. I'd like to replace all dashes, white-space, and special chars with a simple underscore. I've seen some possible conventions using the AWS CLI, but I am unsure what the best command for this is. Any advice would be greatly appreciated.
You could write a program (a sketch follows the list) that:
Lists the contents of the bucket
Calls CopyObject() to copy the object to a new Key
Calls DeleteObject() to delete the previous copy
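Here is a minimal Node.js sketch of that approach, assuming the AWS SDK for JavaScript v2; the sanitize() rule and the bucket name are placeholders you would adapt:

const AWS = require('aws-sdk');
const s3 = new AWS.S3();

// Placeholder rule: collapse whitespace, dashes, and other special characters to underscores.
const sanitize = (key) => key.replace(/[^A-Za-z0-9/._]+/g, '_');

async function renameAll(bucket) {
  let ContinuationToken;
  do {
    // 1. List the contents of the bucket (paginated).
    const page = await s3.listObjectsV2({ Bucket: bucket, ContinuationToken }).promise();
    for (const obj of page.Contents || []) {
      const newKey = sanitize(obj.Key);
      if (newKey === obj.Key) continue; // already clean
      // 2. Copy the object to the new Key.
      await s3.copyObject({
        Bucket: bucket,
        CopySource: `${bucket}/${encodeURIComponent(obj.Key)}`,
        Key: newKey,
      }).promise();
      // 3. Delete the previous copy.
      await s3.deleteObject({ Bucket: bucket, Key: obj.Key }).promise();
    }
    ContinuationToken = page.NextContinuationToken;
  } while (ContinuationToken);
}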
Or, you could take advantage of the fact that the AWS CLI offers an aws s3 mv command that will Copy + Delete for you.
I often simply create an Excel spreadsheet with the existing names, and a formula for determining what name I'd like. Then, I create a third column with:
aws s3 mv [Column 1] [Column 2]
Use Copy Down on the rows to get all the mv commands. Then, copy the column of commands, paste them into the command-line and it will rename all the objects in Amazon S3! (Test with 1-2 lines first, in case there is an error in the formula.)
This might seem primitive, but it's a very quick way to make the changes.

Skip element in BizTalk flat file assembly?

I've been tasked to map an input XML (actually an SAP IDoc XML) and to generate a number of flat files. Each input XML may yield multiple output files (one output file per lot number), so I will be using xsl:key and the key() function in my mapping, based on the lot number.
The thing is, the lot number itself will not be in the file content, but the output file name needs to contain that lot number value.
So the question really is: can I map the lot number into the XML and have the flat file assembler skip it when it produces the file? Or is there another way the lot number can be applied as the file name by the assembler without having it inside the file itself?
In your orchestration you can set a context property for each output message:
msgOutput(FILE.ReceivedFileName) = "DynamicStuff";
msgOutput then goes to the send shape.
In your send port you set the output file like this:
FixedStuff_%SourceFileName%.xml
The result:
FixedStuff_DynamicStuff.xml
If the value is not required in the message content, don't map it. That's it.
To insert a value into the file name, the lot number in this case, you will need to promote that value to the FILE.ReceivedFileName Context Property. Then, you can use the %SourceFileName% Macro as part of the name setting in the Send Port. You can set FILE.ReceivedFileName by either Property Promotion or xpath() in an Orchestration.
Bonus: sorting and grouping in XSLT is rather unwieldy, which is why I don't do that anymore. Instead, you can use SQL: BizTalk: Sorting and Grouping Flat File Data In SQL Instead of XSL

Node.js - Oracle DB and fetchAsString format

I am stuck on a problem and I am not sure of the best way to solve it. I have a date column that I want to select and fetch as a string, which is great, because the node-oracledb module has this option with fetchAsString. But it fetches the date like this, for example: 10-JAN-16, and I want to fetch it like this: 10-01-2016. Is there a way to do that from the node-oracledb module, or should I modify the date after I get the result from the query?
UPDATE: I mean a solution without to_char in the query and without any query modifications.
Check out this section of my series on Working with Dates in JavaScript, JSON, and Oracle Database:
https://dzone.com/articles/working-with-dates-using-the-nodejs-driver
The logon trigger shows an example of using ALTER SESSION to set the default date format. Keep in mind that there are NLS_DATE_FORMAT, NLS_TIMESTAMP_FORMAT, and NLS_TIMESTAMP_TZ_FORMAT.
I only show NLS_TIMESTAMP_TZ_FORMAT because I convert to that type in the examples that follow as I need to do some time zone conversion for the date format I'm using.
Another way to set the NLS parameters is to use environment variables of the same name. Note that this method will not work unless you set the NLS_LANG environment variable as well.
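For a concrete picture, here is a minimal node-oracledb sketch, assuming a recent driver version with promise support; the connection details are placeholders. It sets NLS_DATE_FORMAT for the session (the same effect the logon trigger achieves) and leaves the query itself untouched:

const oracledb = require('oracledb');

async function run() {
  // Fetch DATE columns as strings instead of JavaScript Date objects.
  oracledb.fetchAsString = [oracledb.DATE];

  const connection = await oracledb.getConnection({
    user: 'scott',                      // placeholder
    password: 'tiger',                  // placeholder
    connectString: 'localhost/XEPDB1',  // placeholder
  });

  // Same effect as the logon trigger: change the session's default date format.
  await connection.execute(`ALTER SESSION SET NLS_DATE_FORMAT = 'DD-MM-YYYY'`);

  // The query itself needs no to_char and no other modification.
  const result = await connection.execute('SELECT hire_date FROM employees');
  console.log(result.rows); // e.g. [ [ '10-01-2016' ], ... ]

  await connection.close();
}

run().catch(console.error);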

pentaho create archive folder with MM-YYYY

I would like to archive every file in a folder by putting it in another archive folder with a name like this: "Archive/myfolder-06-2014"
My problem is how to retrieve the current month and year, and then how to create a folder (if it does not already exist) based on these values.
This solution may be a little awkward (due to the required fuss), but it seems to work. The idea is to precompute the target filename in a separate transformation and store it as a system variable (TARGET_ZIP_FILENAME):
The following diagrams show the settings of selected components.
Get the current time...
Provide the pattern of the target filename as a string constant...
Extract the month and year as formatted integers...
Replace the month in the pattern (the year will work equivalently)
Set the resulting filename as a system variable
The main job will call the transformation and use the system variable as the zip target filename.
Also, you have to make sure that the setting Create Parent folder is active.
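For reference only (this is not Pentaho/Kettle code), the folder-name logic that the transformation computes comes down to formatting the current month and year; "myfolder" below is a placeholder:

// Illustration of the target-name computation, outside Pentaho.
const now = new Date();
const mm = String(now.getMonth() + 1).padStart(2, '0');
const yyyy = now.getFullYear();
console.log(`Archive/myfolder-${mm}-${yyyy}`); // e.g. Archive/myfolder-06-2014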
