This is my file pattern: adm_domain_20180401, adm_domain_20180402; these come from one particular source. The same folder also contains adm_agent_20180401, adm_agent_20180402. I want to copy only the files with the prefix adm_domain from Blob Storage to ADLS. Is there any way to define this file pattern in the input dataset?
DATASET:
{
    "name": "CgAdmDomain",
    "properties": {
        "published": false,
        "type": "AzureBlob",
        "linkedServiceName": "flk_blob_dev_ls",
        "typeProperties": {
            "folderPath": "incoming/{Date}/",
            "format": {
                "type": "TextFormat"
            },
            "partitionedBy": [
                {
                    "name": "Date",
                    "value": {
                        "type": "DateTime",
                        "date": "SliceStart",
                        "format": "yyyyMMdd"
                    }
                }
            ]
        },
        "availability": {
            "frequency": "Minute",
            "interval": 15
        },
        "external": true,
        "policy": {}
    }
}
Are you using ADF V1 or V2? We are working on adding filename wildcard support in ADF V2.
The fileFilter property is not available for Azure Blob Storage. If you are reading files from an on-premises file share, you can achieve this by specifying a filter that selects a subset of the files in the folderPath rather than all of them (see the File System connector documentation and the sketch below).
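For illustration, a minimal sketch of an on-premises FileShare dataset using fileFilter with a prefix wildcard; the dataset name and the linked service name are assumptions, while folderPath and partitionedBy mirror the question above:
{
    "name": "OnPremAdmDomainFiles",
    "properties": {
        "type": "FileShare",
        "linkedServiceName": "OnPremisesFileServerLinkedService",
        "typeProperties": {
            "folderPath": "incoming/{Date}/",
            "fileFilter": "adm_domain_*",
            "format": {
                "type": "TextFormat"
            },
            "partitionedBy": [
                {
                    "name": "Date",
                    "value": {
                        "type": "DateTime",
                        "date": "SliceStart",
                        "format": "yyyyMMdd"
                    }
                }
            ]
        },
        "availability": {
            "frequency": "Minute",
            "interval": 15
        },
        "external": true,
        "policy": {}
    }
}
Note this only applies to the on-premises File System connector; it does not make fileFilter work against Blob Storage.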
To achieve this for Azure Blob Storage alone, use an Azure Data Factory custom activity: implement the filtering logic in custom .NET code and run it as an activity in the pipeline. See the custom activities documentation for more information.
I have looked at some posts and documentation on how to specify custom folder paths while creating an Azure blob (using Azure Data Factory).
Official documentation:
https://learn.microsoft.com/en-us/azure/data-factory/v1/data-factory-azure-blob-connector#using-partitionedBy-property
Forums posts:
https://dba.stackexchange.com/questions/180487/datafactory-tutorial-blob-does-not-exist
I am able to write into date-indexed folders successfully; what I am not able to do is write into incremented/decremented date folders.
I tried using $$Text.Format (like below), but it gives a compile error: Text.Format is not a valid blob path.
"folderPath": "$$Text.Format('MyRoot/{0:yyyy/MM/dd}/', Date.AddDays(SliceEnd,-2))",
I tried using the partitionedBy section (like below), but it too gives a compile error: Only SliceStart and SliceEnd are valid options for "date".
{
    "name": "MyBlob",
    "properties": {
        "published": false,
        "type": "AzureBlob",
        "linkedServiceName": "MyLinkedService",
        "typeProperties": {
            "fileName": "MyTsv.tsv",
            "folderPath": "MyRoot/{Year}/{Month}/{Day}/",
            "format": {
                "type": "TextFormat",
                "rowDelimiter": "\n",
                "columnDelimiter": "\t",
                "nullValue": ""
            },
            "partitionedBy": [
                {
                    "name": "Year",
                    "value": {
                        "type": "DateTime",
                        "date": "Date.AddDays(SliceEnd,-2)",
                        "format": "yyyy"
                    }
                },
                {
                    "name": "Month",
                    "value": {
                        "type": "DateTime",
                        "date": "Date.AddDays(SliceEnd,-2)",
                        "format": "MM"
                    }
                },
                {
                    "name": "Day",
                    "value": {
                        "type": "DateTime",
                        "date": "Date.AddDays(SliceEnd,-2)",
                        "format": "dd"
                    }
                }
            ]
        },
        "availability": {
            "frequency": "Day",
            "interval": 1
        },
        "external": false,
        "policy": {}
    }
}
Any pointers are appreciated!
EDIT for response from Adam:
I also tried putting the folder structure directly in fileName, as Adam suggested in the forum post below:
Windows Azure: How to create sub directory in a blob container
I used it as in the sample below.
"typeProperties": {
"fileName": "$$Text.Format('{0:yyyy/MM/dd}/MyBlob.tsv', Date.AddDays(SliceEnd,-2))",
"folderPath": "MyRoot/",
"format": {
"type": "TextFormat",
"rowDelimiter": "\n",
"columnDelimiter": "\t",
"nullValue": ""
},
It gives no compile error and no error during deployment, but it throws an error during execution!
The runtime error is: Error in Activity: ScopeJobManager:PrepareScopeScript, Unsupported unstructured stream format '.adddays(sliceend,-2))', can't convert to unstructured stream.
I think the problem is that fileName can be used to create folders, but only with static folder names, not dynamic ones.
You should create a blob using the convention "foldername/myfile.txt", so you can also add more blobs under that folder name. I'd recommend checking this thread: Windows Azure: How to create sub directory in a blob container. It may help you resolve this case.
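As a minimal sketch of that convention in a blob dataset (the names are taken from the question, and the date segment is kept static here for simplicity, so this does not by itself solve the dynamic-offset part):
{
    "name": "MyBlob",
    "properties": {
        "type": "AzureBlob",
        "linkedServiceName": "MyLinkedService",
        "typeProperties": {
            "fileName": "2017/05/02/MyTsv.tsv",
            "folderPath": "MyRoot/",
            "format": {
                "type": "TextFormat",
                "rowDelimiter": "\n",
                "columnDelimiter": "\t"
            }
        },
        "availability": {
            "frequency": "Day",
            "interval": 1
        }
    }
}
The resulting blob name is MyRoot/2017/05/02/MyTsv.tsv, which Blob Storage treats as a virtual folder hierarchy.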
I have a U-SQL script stored in my ADL store and I am trying to execute it. The script file is quite big, about 250 MB.
So far I have a Data Factory, I have created a linked service, and I am trying to create a Data Lake Analytics U-SQL activity.
The code for my U-SQL Activity looks like this:
{
    "name": "RunUSQLScript1",
    "properties": {
        "description": "Runs the USQL Script",
        "activities": [
            {
                "name": "DataLakeAnalyticsUSqlActivityTemplate",
                "type": "DataLakeAnalyticsU-SQL",
                "linkedServiceName": "AzureDataLakeStoreLinkedService",
                "typeProperties": {
                    "scriptPath": "/Output/dynamic.usql",
                    "scriptLinkedService": "AzureDataLakeStoreLinkedService",
                    "degreeOfParallelism": 3,
                    "priority": 1000
                },
                "policy": {
                    "concurrency": 1,
                    "executionPriorityOrder": "OldestFirst",
                    "retry": 3,
                    "timeout": "01:00:00"
                },
                "scheduler": {
                    "frequency": "Day",
                    "interval": 1
                }
            }
        ],
        "start": "2017-05-02T00:00:00Z",
        "end": "2017-05-02T00:00:00Z"
    }
}
However, I get the following error:
Error
Activity 'DataLakeAnalyticsUSqlActivityTemplate' from pipeline 'RunUSQLScript1' has no output(s) and no schedule. Please add an output dataset or define activity schedule.
What I would like is to have this activity run on demand, i.e. I do not want it scheduled at all, and I also do not understand what the inputs and outputs are in my case. The U-SQL script I am trying to run operates on millions of files in my ADL store and saves them back after some modification of the content.
Currently ADF does not support running a U-SQL script stored in ADLS for a U-SQL activity, i.e. the "scriptLinkedService" under "typeProperties" has to be an Azure Blob Storage linked service. We will update the documentation for the U-SQL activity to make this clearer.
Support for running a U-SQL script stored in ADLS is on our product backlog, but we don't have a committed date for this yet.
Shirley Wang
Currently ADF does not support executing the activity on demand; it needs to be configured with a schedule. You will need at least one output to drive the scheduled execution of the activity. The output can be a dummy Azure Storage dataset that never actually has data written to it; ADF only uses its availability properties to drive the scheduled execution. For example:
{
    "name": "OutputDataset",
    "properties": {
        "type": "AzureBlob",
        "linkedServiceName": "AzureStorageLinkedService",
        "typeProperties": {
            "fileName": "dummyoutput.txt",
            "folderPath": "adf/output",
            "format": {
                "type": "TextFormat",
                "columnDelimiter": "\t"
            }
        },
        "availability": {
            "frequency": "Day",
            "interval": 1
        }
    }
}
I have two datasets, one "FileShare" DS1 and another "BlobSource" DS2. I define a pipeline with one copy activity, which needs to copy the files from DS1 to DS3 (BlobSource), with the dependency specified as DS2. The activity is specified below:
{
    "type": "Copy",
    "typeProperties": {
        "source": {
            "type": "FileShare"
        },
        "sink": {
            "type": "BlobSource"
        }
    },
    "inputs": [
        {
            "name": "FoodGroupDescriptionsFileSystem"
        },
        {
            "name": "FoodGroupDescriptionsInputBlob"
        }
    ],
    "outputs": [
        {
            "name": "FoodGroupDescriptionsAzureBlob"
        }
    ],
    "policy": {
        "timeout": "01:00:00",
        "concurrency": 1,
        "executionPriorityOrder": "NewestFirst"
    },
    "scheduler": {
        "frequency": "Minute",
        "interval": 15
    },
    "name": "FoodGroupDescriptions",
    "description": "#1 Bulk Import FoodGroupDescriptions"
}
Here, how can I specify multiple source types (both FileShare and BlobSource)? It throws an error when I try to pass them as a list.
The copy activity doesn't like multiple inputs or outputs. It can only perform a 1 to 1 copy... It won't even change the filename for you in the output dataset, never mind merging files!
This is probably intentional so Microsoft can charge you more for additional activities. But let's not digress into that one.
I suggest having one pipeline copy both files into some sort of Azure storage using separate activities (one per file), as sketched below. Then have a second downstream pipeline with a custom activity that reads and merges/concatenates the files to produce a single output.
Remember that ADF isn't an ETL tool like SSIS. It's just there to invoke other Azure services; copying is about as complex as it gets.
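A minimal sketch of the first pipeline suggested above, with one copy activity per file; the pipeline name, the staged output dataset names, and the start/end window are assumptions, while the input dataset names come from the question:
{
    "name": "CopyBothFilesPipeline",
    "properties": {
        "activities": [
            {
                "name": "CopyFileShareToBlob",
                "type": "Copy",
                "typeProperties": {
                    "source": { "type": "FileSystemSource" },
                    "sink": { "type": "BlobSink" }
                },
                "inputs": [ { "name": "FoodGroupDescriptionsFileSystem" } ],
                "outputs": [ { "name": "FoodGroupDescriptionsStagedBlob" } ]
            },
            {
                "name": "CopyBlobToBlob",
                "type": "Copy",
                "typeProperties": {
                    "source": { "type": "BlobSource" },
                    "sink": { "type": "BlobSink" }
                },
                "inputs": [ { "name": "FoodGroupDescriptionsInputBlob" } ],
                "outputs": [ { "name": "FoodGroupDescriptionsStagedBlob2" } ]
            }
        ],
        "start": "2017-01-01T00:00:00Z",
        "end": "2017-01-02T00:00:00Z"
    }
}
The downstream merge would then be a separate pipeline whose custom (.NET) activity reads both staged blobs and writes a single combined output.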
I am getting the following error while running a U-SQL activity in a pipeline in ADF:
Error in Activity:
{"errorId":"E_CSC_USER_SYNTAXERROR","severity":"Error","component":"CSC",
"source":"USER","message":"syntax error.
Final statement did not end with a semicolon","details":"at token 'txt', line 3\r\nnear the ###:\r\n**************\r\nDECLARE @in string = \"/demo/SearchLog.txt\";\nDECLARE @out string = \"/scripts/Result.txt\";\nSearchLogProcessing.txt ### \n",
"description":"Invalid syntax found in the script.",
"resolution":"Correct the script syntax, using expected token(s) as a guide.","helpLink":"","filePath":"","lineNumber":3,
"startOffset":109,"endOffset":112}].
Here is the code of the output dataset, the pipeline, and the U-SQL script that I am trying to execute in the pipeline.
OutputDataset:
{
    "name": "OutputDataLakeTable",
    "properties": {
        "published": false,
        "type": "AzureDataLakeStore",
        "linkedServiceName": "LinkedServiceDestination",
        "typeProperties": {
            "folderPath": "scripts/"
        },
        "availability": {
            "frequency": "Hour",
            "interval": 1
        }
    }
}
Pipeline:
{
    "name": "ComputeEventsByRegionPipeline",
    "properties": {
        "description": "This is a pipeline to compute events for en-gb locale and date less than 2012/02/19.",
        "activities": [
            {
                "type": "DataLakeAnalyticsU-SQL",
                "typeProperties": {
                    "script": "SearchLogProcessing.txt",
                    "scriptPath": "scripts\\",
                    "degreeOfParallelism": 3,
                    "priority": 100,
                    "parameters": {
                        "in": "/demo/SearchLog.txt",
                        "out": "/scripts/Result.txt"
                    }
                },
                "inputs": [
                    {
                        "name": "InputDataLakeTable"
                    }
                ],
                "outputs": [
                    {
                        "name": "OutputDataLakeTable"
                    }
                ],
                "policy": {
                    "timeout": "06:00:00",
                    "concurrency": 1,
                    "executionPriorityOrder": "NewestFirst",
                    "retry": 1
                },
                "scheduler": {
                    "frequency": "Minute",
                    "interval": 15
                },
                "name": "CopybyU-SQL",
                "linkedServiceName": "AzureDataLakeAnalyticsLinkedService"
            }
        ],
        "start": "2017-01-03T12:01:05.53Z",
        "end": "2017-01-03T13:01:05.53Z",
        "isPaused": false,
        "hubName": "denojaidbfactory_hub",
        "pipelineMode": "Scheduled"
    }
}
Here is my U-SQL script, which I am trying to execute using the "DataLakeAnalyticsU-SQL" activity type.
@searchlog =
    EXTRACT UserId int,
            Start DateTime,
            Region string,
            Query string,
            Duration int?,
            Urls string,
            ClickedUrls string
    FROM @in
    USING Extractors.Text(delimiter:'|');

@rs1 =
    SELECT Start, Region, Duration
    FROM @searchlog
    WHERE Region == "kota";

OUTPUT @rs1
    TO @out
    USING Outputters.Text(delimiter:'|');
Please suggest how to resolve this issue.
Your activity definition is missing the scriptLinkedService attribute. You also (currently) need to place the U-SQL script in Azure Blob Storage to run it successfully, so you also need an AzureStorage linked service, for example:
{
    "name": "StorageLinkedService",
    "properties": {
        "description": "",
        "type": "AzureStorage",
        "typeProperties": {
            "connectionString": "DefaultEndpointsProtocol=https;AccountName=myAzureBlobStorageAccount;AccountKey=**********"
        }
    }
}
Create this linked service, replacing the Blob storage name myAzureBlobStorageAccount with your relevant Blob Storage account, then place the U-SQL script (SearchLogProcessing.txt) in a container there and try again. In my example pipeline below, I have a container called adlascripts in my Blob store and the script is in there:
Make sure the scriptPath is complete, as Alexandre mentioned. Start of the pipeline:
{
    "name": "ComputeEventsByRegionPipeline",
    "properties": {
        "description": "This is a pipeline to compute events for en-gb locale and date less than 2012/02/19.",
        "activities": [
            {
                "type": "DataLakeAnalyticsU-SQL",
                "typeProperties": {
                    "scriptPath": "adlascripts\\SearchLogProcessing.txt",
                    "scriptLinkedService": "StorageLinkedService",
                    "degreeOfParallelism": 3,
                    "priority": 100,
                    "parameters": {
                        "in": "/input/SearchLog.tsv",
                        "out": "/output/Result.tsv"
                    }
                },
                ...
The input and output .tsv files can be in the data lake and use the AzureDataLakeStoreLinkedService linked service.
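For example, a minimal sketch of an input dataset sitting in the data lake; the folder and file names are assumptions matching the parameter values above:
{
    "name": "InputDataLakeTable",
    "properties": {
        "type": "AzureDataLakeStore",
        "linkedServiceName": "AzureDataLakeStoreLinkedService",
        "typeProperties": {
            "folderPath": "input/",
            "fileName": "SearchLog.tsv",
            "format": {
                "type": "TextFormat",
                "columnDelimiter": "\t"
            }
        },
        "availability": {
            "frequency": "Minute",
            "interval": 15
        },
        "external": true,
        "policy": {}
    }
}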
I can see you are trying to follow the demo at https://learn.microsoft.com/en-us/azure/data-factory/data-factory-usql-activity#script-definition. It is not the most intuitive demo, and there seem to be some gaps: where is the definition for StorageLinkedService? Where is SearchLogProcessing.txt? I found it by googling, but there should be a link on the page. I got it to work, but felt a bit like Harry Potter in the Half-Blood Prince.
Remove the script attribute in your U-SQL activity definition and provide the complete path to your script (including filename) in the scriptPath attribute.
Reference: https://learn.microsoft.com/en-us/azure/data-factory/data-factory-usql-activity
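A minimal sketch of the corrected typeProperties, assuming the script sits in a Blob container named adlascripts as in the earlier answer:
"typeProperties": {
    "scriptPath": "adlascripts\\SearchLogProcessing.txt",
    "scriptLinkedService": "StorageLinkedService",
    "degreeOfParallelism": 3,
    "priority": 100,
    "parameters": {
        "in": "/demo/SearchLog.txt",
        "out": "/scripts/Result.txt"
    }
}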
I had a similar issue, where Azure Data Factory would not recognize my script files. A way to avoid the whole issue, while not having to paste a lot of code, is to register a stored procedure. You can do it like this:
DROP PROCEDURE IF EXISTS master.dbo.sp_test;
CREATE PROCEDURE master.dbo.sp_test()
AS
BEGIN
    @searchlog =
        EXTRACT UserId int,
                Start DateTime,
                Region string,
                Query string,
                Duration int?,
                Urls string,
                ClickedUrls string
        FROM @in
        USING Extractors.Text(delimiter:'|');

    @rs1 =
        SELECT Start, Region, Duration
        FROM @searchlog
        WHERE Region == "kota";

    OUTPUT @rs1
        TO @out
        USING Outputters.Text(delimiter:'|');
END;
After running this, you can use
"script": "master.dbo.sp_test()"
in your JSON pipeline definition. Whenever you update the U-SQL script, simply re-run the definition of the procedure. Then there will be no need to copy script files to Blob Storage.
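As a minimal sketch of how that looks in the activity's typeProperties, following the answer above (the parameter values are the ones from the question; ADF passes them to the job as DECLARE statements):
"typeProperties": {
    "script": "master.dbo.sp_test()",
    "degreeOfParallelism": 3,
    "priority": 100,
    "parameters": {
        "in": "/demo/SearchLog.txt",
        "out": "/scripts/Result.txt"
    }
}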
When creating an HDInsight on-demand linked service, Data Factory creates a new container for the HDInsight cluster. I would like to know how I can create a table that points to that container. Here is my table definition:
{
    "name": "AzureBlobLocation",
    "properties": {
        "published": false,
        "type": "AzureBlob",
        "linkedServiceName": "AzureBlobLinkedService",
        "typeProperties": {
            "folderPath": "????/Folder1/Folder2/Output/Aggregation/",
            "format": {
                "type": "TextFormat"
            }
        },
        "availability": {
            "frequency": "Day",
            "interval": 1
        }
    }
}
What should go in place of the '????' I put in there? The keyword is not accepted.
I should use the keyword 'container' in order to point to the working container.
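A minimal sketch following that answer, assuming the literal 'container' segment resolves to the on-demand cluster's working container at runtime:
"folderPath": "container/Folder1/Folder2/Output/Aggregation/",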