How to create an unbounded task flow in Oracle MAF - oracle-adf-mobile

When we create a task flow from maf-features.xml, only a bounded task flow is created. How should we create an unbounded task flow so that it can go back to the parent task flow? And what is the use of the adfc-mobile-config.xml unbounded task flow that is created automatically when we create the MAF application?

From the docs: A MAF AMX application feature always contains one unbounded task flow, which provides one or more entry points to that application feature. An entry point is represented by a view activity. By default, the source file for the unbounded task flow is the adfc-mobile-config.xml file.
Consider using an unbounded task flow if the following applies:
■ There is no need for the task flow to be called by another task flow.
■ The MAF AMX application feature has multiple points of entry.
■ There is no need for a specifically designated activity to run first in the task flow (default activity).
An unbounded task flow can call a bounded task flow, but cannot be called by another task flow.
Unbounded task flows are there so that you can navigate to an AMX page; usually you will then call a bounded task flow from there to do more detailed work.
I do not understand what you mean by: "so that it could go back to the parent task flow"
Unbounded task flows cannot be called so not sure what you mean by a "parent" in this context.
If you need to navigate back to content that resides on an unbounded task flow, you could navigate using a task flow return (from the called task flow).
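To make the "task flow return" route concrete, here is a rough, abridged sketch of the metadata involved; the root-element attributes are omitted, and every ID, outcome, and page name below is invented for illustration. The called bounded task flow declares a task-flow-return activity, and a control flow case routes to it; when that activity is reached, control goes back to whatever activity on the unbounded task flow (adfc-mobile-config.xml) invoked the bounded flow through its task-flow-call.

<!-- details-task-flow.xml: a called bounded task flow (abridged, illustrative only) -->
<task-flow-definition id="details-task-flow">
  <default-activity>details</default-activity>
  <view id="details">
    <page>/details.amx</page>
  </view>
  <!-- Reaching this activity returns control to the caller on the unbounded task flow -->
  <task-flow-return id="backToCaller">
    <outcome>
      <name>done</name>
    </outcome>
  </task-flow-return>
  <control-flow-rule id="__1">
    <from-activity-id>details</from-activity-id>
    <control-flow-case id="__2">
      <from-outcome>back</from-outcome>
      <to-activity-id>backToCaller</to-activity-id>
    </control-flow-case>
  </control-flow-rule>
</task-flow-definition>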

Related

Cross Job Dependencies in Databricks Workflow

I am trying to create a data pipeline in Databricks using the Workflows UI. I have a significant number of tasks which I want to split across multiple jobs, with dependencies defined across them. But it seems that Databricks does not support cross-job dependencies, so all tasks must be defined in the same job, with dependencies only between tasks of that job. This results in a very big and messy job diagram.
Is there a better way to do this?
P.S. I have access only to the UI portal and won't be able to use the Jobs API (in case there is some way to do this via the API).
It's possible to trigger another job, but you will need to use the REST API for that, and you will also need to handle its execution, etc.
But the ability to have another job as a subtask is coming - if you watch the recent quarterly roadmap webinar, you will see a slide about "Enhanced control flow" that mentions "Trigger another job" functionality.
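As a rough illustration of the REST API approach, a final task in the upstream job could call the Jobs "run now" endpoint to kick off the downstream job. This is only a sketch: the workspace URL, token variable, and job ID are placeholders, and in practice you would also poll the resulting run's status via /api/2.1/jobs/runs/get.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class TriggerDownstreamJob {
    public static void main(String[] args) throws Exception {
        String workspaceUrl = "https://<your-workspace>.azuredatabricks.net"; // placeholder workspace URL
        String token = System.getenv("DATABRICKS_TOKEN");                     // personal access token
        long downstreamJobId = 123L;                                          // placeholder job ID

        // POST /api/2.1/jobs/run-now starts a new run of an existing job.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(workspaceUrl + "/api/2.1/jobs/run-now"))
                .header("Authorization", "Bearer " + token)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString("{\"job_id\": " + downstreamJobId + "}"))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // The response body contains a run_id you can use to track the triggered run.
        System.out.println(response.statusCode() + " " + response.body());
    }
}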

Can Azure Durable Functions support cancel, suspend, and retry/replay of activities?

We are exploring Azure Durable Functions to support the following requirements. Can Durable Functions do these?
A user can suspend the workflow and resume it whenever they want.
If an activity of the workflow fails, the user can retry that particular activity, with human intervention, as many times as they want.
A user can choose from which step the workflow is resumed; more briefly, if 2 of the 4 steps in the workflow have been executed, the user can re-run the workflow from the beginning (step 1).
Thanks
It's possible to "stop" the execution using External Event to stop the processing of an execution. You can restart the execution using Rewind API, but it will restart the whole workflow, not particular tasks.
More info: https://learn.microsoft.com/en-us/azure/azure-functions/durable/durable-functions-instance-management?tabs=csharp#rewind-instances-preview
https://learn.microsoft.com/en-us/azure/azure-functions/durable/durable-functions-external-events?tabs=csharp
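To make the two mechanisms above a little more concrete, here is a hedged sketch that drives them through Durable Functions' built-in HTTP management API (the operations described in the linked docs). The function app host, system key variable, and instance IDs are placeholders; note again that rewind is a preview feature and replays the failed orchestration, not an individual activity.

import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class DurableInstanceManagement {
    static final HttpClient HTTP = HttpClient.newHttpClient();
    static final String HOST = "https://<your-function-app>.azurewebsites.net"; // placeholder
    static final String CODE = System.getenv("DURABLE_SYSTEM_KEY");             // task hub system key

    // Raise an external event (e.g. "Resume") that the orchestrator is waiting on.
    static int raiseEvent(String instanceId, String eventName) throws Exception {
        HttpRequest req = HttpRequest.newBuilder()
                .uri(URI.create(HOST + "/runtime/webhooks/durabletask/instances/"
                        + instanceId + "/raiseEvent/" + eventName + "?code=" + CODE))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString("{}")) // optional event payload
                .build();
        return HTTP.send(req, HttpResponse.BodyHandlers.ofString()).statusCode();
    }

    // Rewind a failed instance so it is replayed from its last good state (preview feature).
    static int rewind(String instanceId, String reason) throws Exception {
        String encodedReason = URLEncoder.encode(reason, StandardCharsets.UTF_8);
        HttpRequest req = HttpRequest.newBuilder()
                .uri(URI.create(HOST + "/runtime/webhooks/durabletask/instances/"
                        + instanceId + "/rewind?reason=" + encodedReason + "&code=" + CODE))
                .POST(HttpRequest.BodyPublishers.noBody())
                .build();
        return HTTP.send(req, HttpResponse.BodyHandlers.ofString()).statusCode();
    }
}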

How does the ForEach activity declare success in Data Factory?

In the following scenario we have a ForEach activity running in an Azure Data Factory pipeline to copy data from source to destination.
The last Copy activity took 4:10:33, but the ForEach activity declared Succeeded 36 minutes later: 4:46:12.
The question is: why does the ForEach activity need these extra 36 minutes?
Is it the case that the ForEach also needs to consolidate results from its sub-activities before declaring success or failure?
Official answer from Microsoft: The ForEach activity does wait for all inner activity runs to complete. In theory, there should not be much delay in marking the ForEach run as succeeded after the last activity run within it succeeds. However, ADF relies on a partner service to execute the runs, and it is possible that the partner service ran into failures and could not complete the ForEach in time. There is built-in logic to retry and recover, but the visible behavior in the ADF activity runs is a delay. It is also possible that the orchestration service fails and the partner service keeps retrying its calls, but usually the partner-service delay is the main cause here.
Our assumption: the reported duration is end-to-end for the pipeline activity. That takes into account all factors, such as marshaling of your data flow script from ADF to the Spark cluster, cluster acquisition time, job execution, and I/O write time. Because ADF is serverless compute, I think the ForEach needs time to wait for all activities to acquire and release computing resources, but this is my guess, because there are few official explanations.
So there will be a delay, which varies according to the inner activities.

Task vs Service for database operations

What is the difference between JavaFX 8 Task and Service, and in which cases is it better to use one over the other? Which is better to use for database operations?
Main Difference between Task and Service - One Time versus Repeated Execution
A Task is a one off thing - you can only use a Task once. If you want to perform the same Task again, you need to construct a new Task instance.
A Service has a reusable interface so that you can start and restart a single service instance multiple times. Behind the scenes, it just takes a Task definition as input and creates new tasks as needed.
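A minimal sketch of that relationship (the class and method bodies here are illustrative, not taken from any particular tutorial): the Service subclass only supplies createTask(), and every start() or restart() call gets a fresh Task instance.

import javafx.concurrent.Service;
import javafx.concurrent.Task;

// A reusable Service: it manufactures a brand-new Task for every run.
public class CountdownService extends Service<String> {
    @Override
    protected Task<String> createTask() {
        return new Task<String>() {
            @Override
            protected String call() throws Exception {
                for (int i = 5; i > 0; i--) {
                    updateMessage("T-minus " + i); // observable from the FX thread
                    Thread.sleep(1_000);
                }
                return "Liftoff";
            }
        };
    }
}

// Usage on the JavaFX Application Thread:
//   CountdownService service = new CountdownService();
//   service.setOnSucceeded(e -> System.out.println(service.getValue()));
//   service.start();    // first run
//   service.restart();  // later runs reuse the same Service instance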
Example Use Cases
Task Example => monitoring and reporting progress of a long running startup task on application initialization, like this Splash Page example.
Service Example => The internal load worker implementation for WebEngine where the same task, loading a page asynchronously, needs to be repeated for each page loaded.
Recommendation - Initially try to solve your problem using only a Task and not a Service
Until you are more familiar with concurrency in JavaFX, I'd advise sticking to just using a Task rather than a Service. Tasks have a slightly simpler interface. You can accomplish most of what a Service does simply by creating new Task instances when you need them. If, after understanding Task, you find yourself wanting a predefined API for starting or restarting Tasks, then start using Service at that time.
Database Access Sample using Tasks
Either Task or Service will work for performing database operations off of the JavaFX application thread. Which to use depends on your personal coding preference as well as the particular database operation being performed.
Here is an example which uses a Task to access a database via JDBC. The example was created for JavaFX - Background Thread for SQL Query.
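The linked example itself is not reproduced here, but a minimal sketch in the same spirit looks like the following; the JDBC URL, table, and column names are placeholders.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.List;
import javafx.concurrent.Task;

// Runs the query off the JavaFX Application Thread; consume the result in onSucceeded.
public class CustomerNamesTask extends Task<List<String>> {
    @Override
    protected List<String> call() throws Exception {
        List<String> names = new ArrayList<>();
        try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:demo"); // placeholder URL
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT name FROM customer")) {   // placeholder query
            while (rs.next()) {
                names.add(rs.getString("name"));
            }
        }
        return names;
    }
}

// Usage:
//   CustomerNamesTask task = new CustomerNamesTask();
//   task.setOnSucceeded(e -> listView.getItems().setAll(task.getValue())); // back on the FX thread
//   new Thread(task, "db-query").start(); // or submit it to an ExecutorService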
Background Information
The JavaFX concurrency tutorial provides a good overview of Task and Service.
There is excellent documentation in the Task and Service javadoc, including sample code for example use cases.
Worker, Task and Service definitions (from Javadoc)
Task and Service are both Workers, so they have this in common:
A Worker is an object which performs some work in one or more background threads, and whose state is observable and available to JavaFX applications and is usable from the main JavaFX Application thread.
Task definition:
A fully observable implementation of a FutureTask. Tasks expose additional state and observable properties useful for programming asynchronous tasks in JavaFX. Because Service is designed to execute a Task, any Tasks defined by application or library code can easily be used with a Service.
Service definition:
A Service is a non-visual component encapsulating the information required to perform some work on one or more background threads. As part of the JavaFX UI library, the Service knows about the JavaFX Application thread and is designed to relieve the application developer from the burden of managing multithreaded code that interacts with the user interface. As such, all of the methods and state on the Service are intended to be invoked exclusively from the JavaFX Application thread.
Service implements Worker. As such, you can observe the state of the background operation and optionally cancel it. Service is a reusable Worker, meaning that it can be reset and restarted. Due to this, a Service can be constructed declaratively and restarted on demand.

Scheduled Tasks with SQL Azure?

I wonder if there's a way to use scheduled tasks with SQL Azure?
Any help is appreciated.
The point is that I want to run a simple, single-line statement every day and would like to avoid setting up a worker role.
There's no SQL Agent equivalent for SQL Azure today. You'd have to call your single-line statement from a background task. However, if you have a Web Role already, you can easily spawn a thread to handle this in your web role without having to create a Worker Role. I blogged about the concept here. To spawn a thread, you can either do it in the OnStart() event handler (where the Role instance is not yet added to the load balancer), or in the Run() method (where the Role instance has been added to the load balancer). Usually it's a good idea to do setup in the OnStart().
One caveat that might not be obvious, whether you execute this call in its own worker role or in a background thread of an existing Web Role: If you scale your Role to, say, two instances, you need to ensure that the daily call only occurs from one of the instances (otherwise you could end up with either duplicates, or a possibly-costly operation being performed multiple times). There are a few techniques you can use to avoid this, such as a table row-lock or an Azure Storage blob lease. With the former, you can use that row to store the timestamp of the last time the operation was executed. If you acquire the lock, you can check to see if the operation occurred within a set time window (maybe an hour?) to decide whether one of the other instances already executed it. If you fail to acquire the lock, you can assume another instance has the lock and is executing the command. There are other techniques - this is just one idea.
In addition to David's answer, if you have a lot of scheduled tasks to do then it might be worth looking at:
lokad.cloud - which has good handling of periodic tasks - http://lokadcloud.codeplex.com/
quartz.net - which is a good all-round scheduling solution - http://quartznet.sourceforge.net/
(You could use quartz.net within the thread that David mentioned, but lokad.cloud would require a slightly bigger architectural change)
I hope it is acceptable to talk about one's own company. We have a web-based service that allows you to do this. You can click this link to see more details on how to schedule execution of SQL Azure queries.
To overcome the issue of multiple role instances executing the same task, you can check the role instance ID and make sure that only the first instance executes the task.
using Microsoft.WindowsAzure.ServiceRuntime;

// Only the first role instance (instance ID ending in "0") runs the scheduled task.
string instanceId = RoleEnvironment.CurrentRoleInstance.Id;
if (!instanceId.EndsWith("0"))
{
    return;
}
