How to set the ADF V1 pipeline to run every Tuesday - azure

I am using ADF V1 in Azure.
I want my pipeline to run every Tuesday at 10:00 AM. I know how to set the time, but how do I set a particular day of the week in the dataset and pipeline?
Here is my sample dataset:
{
  "$schema": "http://datafactories.schema.management.azure.com/internalschemas/2015-09-01/Microsoft.DataFactory.table.json",
  "name": "SQL-My-Table-DS",
  "properties": {
    "structure": [
      {
        "name": "ServiceName",
        "type": "String"
      }
    ],
    "published": false,
    "type": "SqlServerTable",
    "linkedServiceName": "MyLinkedService",
    "typeProperties": {
      "tableName": "[common].[MyTable_Staging]"
    },
    "availability": {
      "frequency": "Week",
      "interval": 1,
      "offset": "00:00:10"
    },
    "external": false,
    "policy": {}
  }
}

If you are using Data Factory version 1, you can achieve this by setting the availability with frequency Month, interval 1, and setting the offset to the number of the day on which you want the pipeline to run.
For example, if you want it to run on the 9th of each month, you will have something like this:
"availability": {
  "frequency": "Month",
  "interval": 1,
  "offset": "9.00:00:00",
  "style": "StartOfInterval"
}
Editing the answer for weekly schedules as well: the snippet below will make the pipeline run every Tuesday.
"availability": {
  "frequency": "Week",
  "interval": 1,
  "offset": "2.00:00:00",
  "style": "StartOfInterval"
}
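The offset is a timespan in days.hours:minutes:seconds form (the month example above uses 9.00:00:00 for the 9th), so you can fold the 10:00 AM requirement from the question into the offset as well. A sketch, assuming the weekly interval is anchored so that a two-day offset lands on Tuesday, as the snippet above already assumes:
"availability": {
  "frequency": "Week",
  "interval": 1,
  "offset": "2.10:00:00",
  "style": "StartOfInterval"
}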

Related

PayPal not charging the customer after the trial ends

I am making PayPal subscription plans and I am facing two problems. First, is there a way to make a plan without a trial period? Second, PayPal is not charging the customer after the trial period ends.
let body = {
  "product_id": "PROD-81J67779NH423045A",
  "name": obj.pname,
  "description": obj.description,
  "billing_cycles": [
    {
      "frequency": {
        "interval_unit": "DAY",
        "interval_count": 1
      },
      "tenure_type": "TRIAL",
      "sequence": 1,
      "total_cycles": 1
    },
    {
      "frequency": {
        "interval_unit": obj.duration,
        "interval_count": 1
      },
      "tenure_type": "REGULAR",
      "sequence": 2,
      "total_cycles": 12,
      "pricing_scheme": {
        "fixed_price": {
          "value": obj.price,
          "currency_code": "USD"
        }
      }
    }
  ],
  "payment_preferences": {
    "service_type": "PREPAID",
    "auto_bill_outstanding": true,
    "setup_fee": {
      "value": obj.price,
      "currency_code": "USD"
    },
    "setup_fee_failure_action": "CONTINUE",
    "payment_failure_threshold": 3
  },
  "quantity_supported": true,
  "taxes": {
    "percentage": obj.tax,
    "inclusive": false
  }
};
You can have a plan without a TRIAL period.
Remove that whole block, and have the REGULAR one be sequence: 1.
As for it 'not charging', you probably haven't waited through enough 24-hour cycles for it to kick in. Subscriptions bill in a batch, so it can take some portion of the day for the initial non-billing trial period to even start, and then another day for that to run.
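Following that advice, the billing_cycles array from the question would shrink to just the REGULAR cycle, promoted to sequence 1 (a sketch reusing the obj fields from the question):
"billing_cycles": [
  {
    "frequency": {
      "interval_unit": obj.duration,
      "interval_count": 1
    },
    "tenure_type": "REGULAR",
    "sequence": 1,
    "total_cycles": 12,
    "pricing_scheme": {
      "fixed_price": {
        "value": obj.price,
        "currency_code": "USD"
      }
    }
  }
]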

Azure Data Factory CI/CD for Schedule Triggers Not Working

Based on the documentation, it looks like only the pipelines and typeProperties blocks of a trigger can be overridden.
What I want to achieve with my CI/CD process and the overriding-parameters functionality is to have the schedule trigger disabled in the target ADF, unlike in my source ADF.
If I inspect the JSON of a trigger, it looks like the following field could do the trick: "runtimeState": "Started".
{
  "name": "name_daily",
  "properties": {
    "description": " ",
    "annotations": [],
    "runtimeState": "Started",
    "pipelines": [
      {
        "pipelineReference": {
          "referenceName": "name",
          "type": "PipelineReference"
        }
      }
    ],
    "type": "ScheduleTrigger",
    "typeProperties": {
      "recurrence": {
        "frequency": "Day",
        "interval": 1,
        "startTime": "2020-05-05T13:01:00.000Z",
        "timeZone": "UTC",
        "schedule": {
          "minutes": [
            1
          ],
          "hours": [
            13
          ]
        }
      }
    }
  }
}
But if I attempt to add it to the arm-template-parameters-definition.json file like this:
"Microsoft.DataFactory/factories/triggers": {
  "properties": {
    "runtimeState": "-",
    "typeProperties": {
      "recurrence": {
        "interval": "=",
        "frequency": "="
      }
    }
  }
}
it never shows up in the Override section in Azure Pipeline Releases.
Does this ADF CI/CD functionality exist for triggers? How can I achieve my target here?
It turns out that runtimeState for triggers is not obeyed in arm-template-parameters-definition.json.
After some more research the path is clearer: I can achieve what I want either by editing the PowerShell script Microsoft has provided or by using an ADF custom task from the Azure DevOps marketplace.

Azure Data Factory copy pipeline: get a file with date of yesterday that arrived today

I am getting a txt file (on today's date) with yesterday's date in its name, and I want to resolve this filename dynamically in my Data Factory pipeline.
The file is placed automatically on a file system, and I want to copy this file to the blob store. In my example below I am simulating this by copying from blob to blob.
For example:
filename_2018-03-11.txt arrived today (2018-03-12) with yesterday's date (2018-03-11) in it. How can I pick this file up on today's date?
Yesterday's slice did run, but the file was not there yet.
Here is my example:
{
  "$schema": "http://datafactories.schema.management.azure.com/schemas/2015-09-01/Microsoft.DataFactory.Pipeline.json",
  "name": "CopyPipeline-fromBlobToBlob",
  "properties": {
    "activities": [
      {
        "type": "Copy",
        "typeProperties": {
          "source": {
            "type": "BlobSource",
            "recursive": true
          },
          "sink": {
            "type": "BlobSink",
            "copyBehavior": "",
            "writeBatchSize": 0,
            "writeBatchTimeout": "00:00:00"
          },
          "enableSkipIncompatibleRow": true
        },
        "inputs": [
          {
            "name": "InputDataset-1"
          }
        ],
        "outputs": [
          {
            "name": "OutputDataset-1"
          }
        ],
        "policy": {
          "timeout": "1.00:00:00",
          "concurrency": 1,
          "executionPriorityOrder": "NewestFirst",
          "style": "StartOfInterval",
          "retry": 3,
          "longRetry": 0,
          "longRetryInterval": "00:00:00"
        },
        "scheduler": {
          "frequency": "Day",
          "interval": 1,
          "offset": "05:00:00"
        },
        "name": "activity_00"
      }
    ],
    "start": "2018-03-07T00:00:00Z",
    "end": "2020-03-08T00:00:00Z",
    "isPaused": false,
    "pipelineMode": "Scheduled"
  }
}
You can use EndOfInterval instead of StartOfInterval in the policy. That will use the end of the day instead of the start of the day to do the execution. You may also want to set the appropriate offset if the file is not available at midnight.
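A sketch of that change, reusing the policy and scheduler from the question (only the style value changes; adjust the offset to when the file actually lands):
"policy": {
  "timeout": "1.00:00:00",
  "concurrency": 1,
  "executionPriorityOrder": "NewestFirst",
  "style": "EndOfInterval",
  "retry": 3,
  "longRetry": 0,
  "longRetryInterval": "00:00:00"
},
"scheduler": {
  "frequency": "Day",
  "interval": 1,
  "offset": "05:00:00"
}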
In ADF V2 you can use built-in system variables (@pipeline().TriggerTime):
https://learn.microsoft.com/en-us/azure/data-factory/control-flow-system-variables
And in the source dataset (InputDataset-1), set the file path/file name to something like this:
@concat('YOUR BASE PATH IN BLOB/', 'filename_',
    adddays(pipeline().TriggerTime, -1, 'yyyy'), '-',
    adddays(pipeline().TriggerTime, -1, 'MM'), '-',
    adddays(pipeline().TriggerTime, -1, 'dd'), '.txt')
You can also use @trigger().scheduledTime to always get the same date, even when, for example, a pipeline run fails and is rerun later.
But remember that it is only available in trigger scope; in my tests it was only evaluated for the Schedule trigger.
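For example, the same filename expression pinned to the trigger's scheduled time (a sketch under the same assumptions as above, using the optional format argument of adddays):
@concat('YOUR BASE PATH IN BLOB/', 'filename_',
    adddays(trigger().scheduledTime, -1, 'yyyy-MM-dd'), '.txt')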

Azure Data Factory - runs all the time?

I've implemented an Azure DF job which executes a SQL stored proc:
{
  "name": "spLoggingProc",
  "properties": {
    "activities": [
      {
        "type": "SqlServerStoredProcedure",
        "typeProperties": {
          "storedProcedureName": "logging"
        },
        "outputs": [
          {
            "name": "spEmptyOutput15-4"
          }
        ],
        "scheduler": {
          "frequency": "Hour",
          "interval": 1
        },
        "name": "spLogging"
      }
    ],
    "start": "2017-01-01T00:00:00Z",
    "end": "2099-01-01T00:10:00Z",
    "isPaused": false,
    "hubName": "dwh_hub",
    "pipelineMode": "Scheduled"
  }
}
The dataset:
{
  "name": "spEmptyOutput15-4",
  "properties": {
    "published": false,
    "type": "AzureSqlTable",
    "linkedServiceName": "DWH",
    "typeProperties": {
      "tableName": "spEmptyOutput15-4"
    },
    "availability": {
      "frequency": "Hour",
      "interval": 1
    }
  }
}
The problem is that the proc now runs every 2-3 seconds, even though the frequency is set to every hour. My goal is for the proc to run once every hour, every day.
Can anyone please help me?
Thanks a lot!
Change the start time to today's date and you will not see the issue. Because you set the start time to the start of the year, ADF has to backfill a slice for every hour of every day since then, roughly 24 x 166 ≈ 4,000 runs, before settling into the normal routine. It is still running on an hourly basis, but since it has to work through the past runs for each hour, you see it executing every few seconds; your proc itself probably takes only 1-2 seconds to complete.
If you do want the past data as well, there is another way to speed up the backfill: run slices in parallel (10 is the maximum). Change the concurrency value under policy, for example to 3:
"policy": {
  "concurrency": 3,
  "executionPriorityOrder": "OldestFirst",
  "retry": 3,
  "timeout": "00:10:00"
}

How to define a Table in Azure Data Factory

When creating an HDInsight on-demand linked service, Data Factory creates a new container for the HDInsight cluster. I would like to know how I can create a table that points to that container. Here is my table definition:
{
  "name": "AzureBlobLocation",
  "properties": {
    "published": false,
    "type": "AzureBlob",
    "linkedServiceName": "AzureBlobLinkedService",
    "typeProperties": {
      "folderPath": "????/Folder1/Folder2/Output/Aggregation/",
      "format": {
        "type": "TextFormat"
      }
    },
    "availability": {
      "frequency": "Day",
      "interval": 1
    }
  }
}
What should go in place of the '????' I put in there? The keyword is not accepted.
I should use the keyword 'container' in order to point to the working container.
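Reading that literally, the typeProperties block would become the following (a sketch; whether 'container' is the literal token or stands in for the generated container's name depends on your setup):
"typeProperties": {
  "folderPath": "container/Folder1/Folder2/Output/Aggregation/",
  "format": {
    "type": "TextFormat"
  }
}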
