I am quite new to ADF and I have the following requirement.
I have a source and a destination.
I want to build logic where, if the file is already present in the target, it should not be copied from the source; otherwise the source file should be copied.
The source and sink are different, so with two Get Metadata activities I am trying to get the files using childItems.
With a Set Variable activity I am passing the output of the Get Metadata activities for comparison.
I am failing at the logic stage.
Please let me know if any better approach is possible.
Logic: if the file is present in the destination, it should not be copied from the source; otherwise it should be copied.
You can use a Filter activity.
In items: @activity('files from source').output.childItems
Conditions: @not(contains(activity('files from target').output.childItems,item()))
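To copy only the remaining files, you can then feed the filter output into a ForEach activity. A minimal sketch, assuming the Filter activity is named 'Filter1' and the source dataset exposes a fileName parameter (both names are assumptions, not from your pipeline):
ForEach Items: @activity('Filter1').output.value
Copy activity (inside the ForEach) source dataset fileName: @item().name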
similar thread: ADF-how to compare two get metadata activity results and skip if the same name exists
I have a CSV file in the following format which I want to copy from an external share to my data lake:
Test; Text
"1"; "This is a text
which goes on on a second line
and on on a third line"
"2"; "Another Test"
I now want to load it with a Copy Data task in an Azure Synapse pipeline. The result is the following:
Test; Text
"1";" \"This is a text"
"which goes on on a second line";
"and on on a third line\"";
"2";" \"Another Test\""
So, as you can see, it is not handling the multi-line text correctly. I also do not see an option to handle multi-line text within a Copy Data task. Unfortunately I'm not able to use a Data Flow task, because it does not allow running with an external Azure runtime, which I'm forced to use due to security reasons.
Of course, I'm not speaking about this single test file only; I have many thousands of files.
My settings for the CSV File look like follows:
CSV Connector Settings
Can someone tell me how to handle this kind of multi-line data correctly?
Do I have any other options within Synapse (apart from Data Flows)?
Thanks a lot for your help
Well, it turns out this is not possible with a CSV file.
The pragmatic solution is to transfer the CSV files as binary files instead, and only load and transform them later on with a Python notebook in Synapse.
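A minimal sketch of such a binary dataset definition, assuming an ADLS Gen2 linked service named LS_ADLS and placeholder container/folder names:
{
    "name": "DS_Binary_Csv",
    "properties": {
        "type": "Binary",
        "linkedServiceName": {
            "referenceName": "LS_ADLS",
            "type": "LinkedServiceReference"
        },
        "typeProperties": {
            "location": {
                "type": "AzureBlobFSLocation",
                "fileSystem": "raw",
                "folderPath": "incoming"
            }
        }
    }
}
Using the Binary type on both the source and sink dataset makes the copy a byte-for-byte transfer, so the quoting and line breaks are left untouched for the notebook to parse later.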
You can achieve this in Azure Data Factory by iterating through all lines and checking for the delimiter in each line. Then, use string manipulation functions with Set Variable activities to convert the multi-line data to single lines.
Look at the following example. I have a Set Variable activity that initializes the req variable with an empty value (taken from a parameter).
For the Lookup, create a dataset with the following configuration pointing to the multi-line CSV:
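The configuration screenshot is not reproduced here. Judging from the expressions below (each raw line ends up in a single column called Prop_0), it is presumably a DelimitedText dataset with no header row and a column delimiter that never occurs in the data, roughly like this (linked service and path names are placeholders):
{
    "name": "DS_Multiline_Csv",
    "properties": {
        "type": "DelimitedText",
        "linkedServiceName": {
            "referenceName": "LS_ADLS",
            "type": "LinkedServiceReference"
        },
        "typeProperties": {
            "location": {
                "type": "AzureBlobFSLocation",
                "fileSystem": "raw",
                "folderPath": "incoming",
                "fileName": "multiline.csv"
            },
            "columnDelimiter": "\u0001",
            "firstRowAsHeader": false
        }
    }
}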
In the ForEach activity, I iterate over every row except the last one by setting the items value to @range(0,sub(activity('Lookup1').output.count,1)). The ForEach must run sequentially so that the variable concatenation happens in order. Inside the ForEach, I have an If activity with the following condition:
@contains(activity('Lookup1').output.value[item()]['Prop_0'],';')
If this is true, then I concatenate the current row onto the req variable using 2 Set Variable activities:
temp: @if(contains(activity('Lookup1').output.value[add(item(),1)]['Prop_0'],';'),concat(variables('req'),activity('Lookup1').output.value[item()]['Prop_0'],decodeUriComponent('%0D%0A')),concat(variables('req'),activity('Lookup1').output.value[item()]['Prop_0'],' '))
actual (req variable): @variables('temp')
For the false branch, I have handled the concatenation in the following way:
temp1: @concat(variables('req'),activity('Lookup1').output.value[item()]['Prop_0'],' ')
actual1 (req variable): @variables('temp1')
Now, I have used a final variable to handle the last line of the file, with the following dynamic content:
@if(contains(last(activity('Lookup1').output.value)['Prop_0'],';'),concat(variables('req'),decodeUriComponent('%0D%0A'),last(activity('Lookup1').output.value)['Prop_0']),concat(variables('req'),last(activity('Lookup1').output.value)['Prop_0']))
Finally, I have a Copy Data activity with a sample source file containing 1 column and 1 row (this is only used as a vehicle to copy our actual data).
Now, configure the source file as shown below:
Create an additional column whose value is the final variable's value:
Create a sink with the following configuration and map only the column created above:
When I run the pipeline, I get the data as required. The following is an output image for reference.
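For reference, the additional-column step on the copy source corresponds roughly to the following JSON. The column name fixedData and the variable name final are placeholders, not names from the original pipeline:
"source": {
    "type": "DelimitedTextSource",
    "additionalColumns": [
        {
            "name": "fixedData",
            "value": {
                "value": "@variables('final')",
                "type": "Expression"
            }
        }
    ]
}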
I have 10 files in a folder and want to move 4 of them to a different location.
I tried 2 approaches to achieve this:
using a Lookup to retrieve the filenames from a JSON file, then feeding it to a ForEach iterator
using Get Metadata to get the file names from the source folder and then adding an If Condition inside a ForEach to copy the files.
But in both cases, all the files in the source folder get copied.
Any help would be appreciated.
Thanks!!
There are 3 ways you can consider for selecting your files, depending on the requirement or blockers.
Check out the official MS doc: Copy activity properties
1. Dynamic content for the FilePath property in the source dataset.
2. You can use wildcard characters in the source folder and file path in the source dataset (see the sketch after this list).
Allowed wildcards are: * (matches zero or more characters) and ? (matches zero or a single character); use ^ to escape if your actual folder name has a wildcard or this escape char inside. See more examples in Folder and file filter examples.
3. List of Files
Point to a text file that includes a list of files you want to copy, one file per line, which is the relative path to the path configured in the dataset. When using this option, do not specify a file name in the dataset. See more examples in File list examples.
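For options 2 and 3, the copy activity source settings look roughly like the following sketches (container, folder, and file names are placeholders, and the store settings type should match your storage):
Option 2, wildcards:
"source": {
    "type": "DelimitedTextSource",
    "storeSettings": {
        "type": "AzureBlobStorageReadSettings",
        "recursive": true,
        "wildcardFolderPath": "input",
        "wildcardFileName": "report_*.csv"
    }
}
Option 3, list of files:
"source": {
    "type": "DelimitedTextSource",
    "storeSettings": {
        "type": "AzureBlobStorageReadSettings",
        "fileListPath": "container/config/files-to-copy.txt"
    }
}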
Example:
Parameterize the source dataset and set the source file name to the value that passes the expression evaluation in an If Condition activity.
I have a simple Copy Data activity in Azure Data Factory with a table as source and destination. Before inserting, I have a delete script in the pre-copy script option. The delete should be done on the basis of parameters passed to the pipeline.
I tried this way but am getting an error.
DELETE FROM [dbo].[StgMetricLoad] where TransactionKey in(pipeline().parameters.TransactionKey)
In my experience, you can't merge a pipeline string parameter into a SQL string like that directly. This should be configured as dynamic content with the @concat built-in function.
I tested it in the Set Variable Activity:
@concat('DELETE FROM [dbo].[StgMetricLoad] where TransactionKey in(',
pipeline().parameters.keystring,
')')
Test Output:
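Applied to the copy activity itself, the same dynamic content goes into the sink's pre-copy script, roughly like this (a sketch; keystring is the pipeline parameter from the test above, and the sink type should match your destination):
"sink": {
    "type": "AzureSqlSink",
    "preCopyScript": {
        "value": "@concat('DELETE FROM [dbo].[StgMetricLoad] where TransactionKey in(', pipeline().parameters.keystring, ')')",
        "type": "Expression"
    }
}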
I am unable to copy CSV files from an SFTP connection to blob storage when using the wildcard (*) in the filename.
More specifically, I receive CSV files in the SFTP on a daily basis, and they are of the format "ddMMyyyyxxxxxx.csv", where "xxxxxx" is the timestamp. More concretely, my CSV file for the 13th of March is "13032019083647.csv", while for the 14th of March it is "14032019083556.csv". Obviously, the timestamp is different every day, thus I want to copy the file regardless of whatever string exists between the date and the file extension.
In the "File" subfield of the "File path" of the "Connection" tab of my dataset, I give as input "13032019*.csv", as instructed by the help icon next to the field:
When I do so, my Debug run fails with:
{"errorCode": "2200", "message":
"ErrorCode=UserErrorInvalidCopyBehaviorBlobNameNotAllowedWithPreserveOrFlattenHierarchy,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=Cannot
adopt copy behavior PreserveHierarchy when copying from folder to a
single file.,Source=Microsoft.DataTransfer.ClientLibrary}
I receive a similar error no matter which type of copy behaviour I choose. I have also tried experimenting with the fileFilter parameter (even though ADF warns that the same behaviour can be achieved with the fileName option), but I still end up getting the same error.
For further clarification, I am attaching the Code segment that ADF produces for this configuration:
I should also mention that when using the full fileName in the corresponding field, namely the value "13032019083647.csv", copying works normally.
Any help would be greatly appreciated!
My guess is that the wildcard operation might match more than one file.
In such cases we need to use a Get Metadata activity, a Filter activity and a ForEach activity to copy these files.
1. Get Metadata activity: use a dataset in this activity that points to the location of the files and output the childItems field.
2. Filter activity: use the filter to select the files based on your needs.
3. ForEach activity: in the ForEach activity, take the items from the previous activity and add a Copy activity inside the ForEach.
In the Copy activity, the source dataset file name should be @item().name (see the sketch below).
I hope this will solve your issue.
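A minimal sketch of these settings, assuming the Get Metadata activity is named 'GetFiles', the Filter activity is named 'Filter1', and the source dataset exposes a fileName parameter (all three names are placeholders):
Get Metadata field list: childItems
Filter Items: @activity('GetFiles').output.childItems
Filter Condition: @startswith(item().name, '13032019')
ForEach Items: @activity('Filter1').output.value
Copy activity source dataset fileName: @item().name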
What worked for me was the following: I kept the same wildcard pattern for the input file, but I set "Copy behaviour: Merge Files". Since, as mentioned, there is only 1 file that satisfies the wildcard condition, only 1 file was created as output. I am aware that this is a sort of "dirty" solution, but it did the trick for me.
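In the copy activity JSON this corresponds to setting copyBehavior on the sink, roughly like this (a sketch for a blob sink):
"sink": {
    "type": "BlobSink",
    "copyBehavior": "MergeFiles"
}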
I am using Data Factory v2 and I currently have a simple copy activity which copies files from an FTP server to blob storage. The file names on this server are of the following form:
File_{Year}{Month}{Day}.zip
In order to download the most recent file I add this filter to my input dataset JSON file:
"fileName": {
"value": "#concat('File_',formatDateTime(utcnow(), 'yyyyMMdd'), '.zip')",
"type": "Expression"
}
I now want to be able to download yesterday's file as well, which is possible using adddays().
However, I would like to be able to do this in the same copy activity, and it seems that Data Factory v2 does not allow me to use the following kind of expression logic:
#concat('File_',formatDateTime(utcnow(), 'yyyyMMdd'), '.zip') || #concat('File_', formatDateTime(adddays(utcnow(), -1), 'yyyyMMdd'), '.zip')
Is this possible or do I need a separate activity?
It would seem strange to need a second activity, since a Copy activity can only take a single input; if the pattern is simple enough, multiple files are treated as a single input, and if not, multiple files are treated as multiple inputs.
The '||' won't work since it will be evaluated as a single string.
But I can provide two solutions for this.
Using a tumbling window daily trigger and setting the start time to yesterday, so it will trigger two pipeline runs.
Using a ForEach activity + Copy activity. The ForEach activity iterates an array to pass yesterday and today to the Copy activity (see the sketch below).
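A minimal sketch of the second option, assuming the source dataset exposes a fileName parameter (the parameter is an assumption):
ForEach Items: @createArray(formatDateTime(utcnow(), 'yyyyMMdd'), formatDateTime(adddays(utcnow(), -1), 'yyyyMMdd'))
Copy activity source dataset fileName: @concat('File_', item(), '.zip')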
Btw, you could just use a string interpolation expression instead of concat. They are the same.
File_@{formatDateTime(utcnow(), 'yyyyMMdd')}.zip
I would suggest you read about the Get Metadata activity. This can be helpful in your scenario, I think.
https://learn.microsoft.com/en-us/azure/data-factory/control-flow-get-metadata-activity
You have the itemName property and the lastModified property; check them out.
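For example, the Get Metadata activity could request those properties like this (a sketch; the dataset reference name is a placeholder):
{
    "name": "Get Metadata1",
    "type": "GetMetadata",
    "typeProperties": {
        "dataset": {
            "referenceName": "DS_Ftp_File",
            "type": "DatasetReference"
        },
        "fieldList": [
            "itemName",
            "lastModified"
        ]
    }
}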