How to show an "if" condition in a use case description? - uml

When we write a use case table (ID, description, actor, precondition, postcondition, basic flow, alternate flow), the basic flow shows plain steps of interaction between the actors and the system. I wonder how to show a condition in the basic flow of a use case. As far as I know, the basic flow contains plain, simple steps, one by one, but I cannot see how to show conditions without pseudocode. Is pseudocode allowed in the basic flow of a UML use case description?
What would the steps be for the sequence below?
For the above diagram, would the table below be right?
-------------------------------------------------------------
| ID | UC01 |
-------------------------------------------------------------
| Description | do something |
-------------------------------------------------------------
| Precondition | -- |
-------------------------------------------------------------
| Postcondition | -- |
-------------------------------------------------------------
| Basic flow | 1. actor requests system to do something |
| | 2. if X = true |
| | 2.1 system does step 1 |
| | else |
| | 2.2 system does step 2 |
| | 3. system returns results to actor |
-------------------------------------------------------------
| Alternate flow| -- |
-------------------------------------------------------------

In tools like Visual Paradigm you can model the flow of events with if/else and loop conditions, and specify the steps as user input and system response.

Use alternate and exceptional flows to document such behavior.
"Do something" and "step 1" are clearly at different levels, so it is better to put them into separate use cases.
"Actor" is not the best name for the actor's role; let's say it's a User.
I had to change "step 1" to "Calculation 1" to avoid confusion.
Example
------------------------------------------------------------------------
| ID | UC01 |
------------------------------------------------------------------------
| Level | User goal, black box |
------------------------------------------------------------------------
| Basic flow | 1. User requests Robot System to do something. |
| | 2. Robot System performs UC02. |
| | 3. Robot System returns results to User. |
------------------------------------------------------------------------
------------------------------------------------------------------------
| ID | UC02 |
------------------------------------------------------------------------
| Level | SubFunction, white box |
------------------------------------------------------------------------
| Basic flow | 1. Robot System validates that X is true. |
| | 2. Robot System does Calculation 1. |
------------------------------------------------------------------------
| Alternate flow 1 | Trigger: Validation fails at step 1, X is false. |
| | 2a. Robot System does Calculation 2. |
------------------------------------------------------------------------

Related

Azure Data Factory - large dependency pipeline

I have a "master" pipeline in Azure Data factory, which looks like this:
One rectangle is Execute pipeline activity for 1 destination (target) Table, so this "child" pipeline takes some data, transform it and save as a specified table. Essentialy this means that before filling table on the right, we have to fill previous (connected with line) tables.
The problem is that this master pipeline contains more than 100 activities and the limit for data factory pipeline is 40 activities.
I was thinking about dividing pipeline into several smaller pipelines (i.e. first layer (3 rectangles on the left), then second layer etc.), however this could cause pipeline to run a lot longer as there could be some large table in each layer.
How to approach this? What is the best practice here?
I had a similar issue at work, but I didn't use Execute Pipeline because it was a terrible approach in my case. I have more than 800 PLs (pipelines) to run, with multiple parent and child dependencies that can go several levels deep depending on the complexity of the data, plus several restrictions (starting with transforming data for 9 regions in the US while reusing PLs). A simplified diagram of one of the many cases I have can easily look like this:
The solution:
A master dependency table in which to store all the dependencies:
| Job ID | dependency ID | level | PL_name |
|--------|---------------|-------|--------------|
| Token1 | | 0 | |
| L1Job1 | Token1 | 1 | my_PL_name_1 |
| L1Job2 | Token1 | 1 | my_PL_name_2 |
| L2Job1 | L1Job1,L1Job2 | 2 | my_PL_name_3 |
| ... | ... | ... | ... |
From here it is a tree problem:
There are ways of mapping trees in SQL (a sketch of the level assignment follows the table below). Once you have all the dependencies mapped from the tree, put them in a staging or tracker table:
| Job ID | dependency ID | level | status | start_date | end_date |
|--------|---------------|-------|-----------|------------|----------|
| Token1 | | 0 | | | |
| L1Job1 | Token1 | 1 | Running | | |
| L1Job2 | Token1 | 1 | Succeeded | | |
| L2Job1 | L1Job1,L1Job2 | 2 | | | |
| ... | ... | ... | ... | ... | ... |
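The answer does this level mapping with SQL and a stored procedure; purely to illustrate the tree idea, the same level assignment can be sketched in Python, with hypothetical job names mirroring the dependency table above:
from functools import lru_cache
# Hypothetical contents of the master dependency table: job -> parent jobs.
deps = {
    "Token1": [],                    # batch token, level 0
    "L1Job1": ["Token1"],
    "L1Job2": ["Token1"],
    "L2Job1": ["L1Job1", "L1Job2"],
}
@lru_cache(maxsize=None)
def level(job):
    # A job's level is one more than the deepest level among its parents.
    parents = deps[job]
    return 0 if not parents else 1 + max(level(p) for p in parents)
for job in deps:
    print(job, level(job))           # Token1 0, L1Job1 1, L1Job2 1, L2Job1 2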
We can easily query this table using a Lookup activity to get the level-1 PLs to run, and use a ForEach activity to trigger each target PL with a dynamic Web Activity. Then update the tracker table's status, start_date, end_date, etc. accordingly for each PL.
There are only two orchestrating PLs:
one for mapping the tree and assigning some kind of unique ID to that batch;
one for validation (it verifies the status of the parent PLs and controls which PL runs next).
Note: both call a stored procedure with some logic depending on the case.
I have a recursive call to the validation PL each time a target pipeline ends:
Let's assume L1Job1 and L1Job2 are running in parallel:
L1Job1 ends successfully -> calls the validation PL -> validation triggers L2Job1 only if both L1Job1 and L1Job2 have a Succeeded status.
If L1Job2 hasn't ended yet, the validation PL ends without triggering L2Job1.
Then L1Job2 ends successfully -> calls the validation PL -> validation triggers L2Job1 only if both L1Job1 and L1Job2 have a Succeeded status.
L2Job1 starts running after passing the validations.
Repeat for each level.
This works because we already mapped all the PL dependencies in the job tracker and we know exactly which PLs should run.
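In the answer this check lives in the stored procedure called by the validation PL; the logic it applies against the tracker table boils down to something like this sketch (Python, with a hypothetical in-memory copy of the tracker):
# Hypothetical in-memory copy of the job tracker: job -> (parents, status).
tracker = {
    "Token1": ([], "Succeeded"),           # batch token, level 0
    "L1Job1": (["Token1"], "Running"),
    "L1Job2": (["Token1"], "Succeeded"),
    "L2Job1": (["L1Job1", "L1Job2"], ""),  # waiting on both level-1 jobs
}
def jobs_ready_to_run(tracker):
    # Jobs that haven't started yet and whose parents have all succeeded.
    return [
        job
        for job, (parents, status) in tracker.items()
        if not status and all(tracker[p][1] == "Succeeded" for p in parents)
    ]
print(jobs_ready_to_run(tracker))          # [] until L1Job1 also succeeds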
I know this looks complicated and may not apply to your case, but I hope it gives you or others a clue about how to solve complex data workflows in Azure Data Factory.
Yes, as per the documentation, the maximum number of activities per pipeline, including inner activities for containers, is 40.
So the only option left is splitting your pipeline into multiple smaller pipelines.
Please check the link below for the limits on ADF:
https://github.com/MicrosoftDocs/azure-docs/blob/master/includes/azure-data-factory-limits.md

Data help and tips

I tried to make a calendar visual, like the one in the forum post I referenced. Unfortunately, I don't see any option other than restructuring my data.
My current data is as follows:
Project ID | Start Phase | Planning Phase | Execution Phase | Monitor Phase | live
22193 | 18-01-2021 | 20-01-2021 | 28-01-2021 | 20-02-2021 | 03-03-2021
29193 | 20-01-2021 | 10-02-2021 | 15-02-2021 | 19-03-2021 |
87596 | 25-02-2021 | 10-03-2021 | 25-03-2021 | 30-03-2021 | 10-04-2021
And I think my restructured data will be as follows:
| Phase ID | Project ID | Phase     | Start      | End        |
| 1        | 22193      | Start     | 18-01-2021 | 20-01-2021 |
| 2        | 22193      | Planning  | 20-01-2021 | 28-01-2021 |
| 3        | 22193      | Execution | 28-01-2021 | 20-02-2021 |
| 4        | 22193      | Monitor   | 20-02-2021 | 03-03-2021 |
| 5        | 22193      | Live      |            | 03-03-2021 |
My question is: is there a way to add the phases for every Project ID in my data in Power BI (DAX, M, or tables)? Doing it manually will take forever.
And a side question: will restructuring my data affect my SQL database?
Edit:
The top structure did not meet my needs because I want to make a calendar visual in Power BI. Unfortunately, there is no visual that accepts multiple dates; the current visuals in Power BI only accept one start date and one end date. That is the reason for restructuring my data: to end up with one start and one end date.
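(In Power Query this is essentially an Unpivot of the phase columns, with each phase's End taken from the next phase's Start. Purely for illustration, and outside Power BI, the same reshape could be sketched in Python/pandas using the sample data above:)
# Illustration only (pandas, not Power BI): unpivot the phase columns and take
# each phase's End from the Start of the following phase.
import pandas as pd
wide = pd.DataFrame({
    "Project ID":      [22193],
    "Start Phase":     ["18-01-2021"],
    "Planning Phase":  ["20-01-2021"],
    "Execution Phase": ["28-01-2021"],
    "Monitor Phase":   ["20-02-2021"],
    "live":            ["03-03-2021"],
})
long = wide.melt(id_vars="Project ID", var_name="Phase", value_name="Start")
long["Start"] = pd.to_datetime(long["Start"], dayfirst=True)
long = long.sort_values(["Project ID", "Start"]).reset_index(drop=True)
long["End"] = long.groupby("Project ID")["Start"].shift(-1)  # next phase's start
print(long)  # note: the 'live' row keeps its date as Start here; adjust as needed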
yours faithfully

Cucumber: nested scenario outline cycles

Scenario to automate:
Given <precondition> was fulfilled
And <user> is authorized
When user requests <endpoint>
Then user should receive <code> response
Test data matrix:
| precondition | endpoint | user1 | user2 | ....
| | /users | OK | Not Found |
| | /roles | OK | OK |
| | /create_user | OK | OK |
| object user exists | /update_user | OK | OK |
| object user exists | /delete_user | OK | OK |
| | /create_data_role | OK | Not Found |
| data role exists | /update_data_role | OK | Not Found |
....
There are around 20 users with different role combinations and around 20 endpoints.
I need to verify each endpoint for each user, so it should be a nested loop.
How do I do it?
Don't do this in Cucumber, for the following reasons:
1) You get no benefit from putting all these routes and conditions in Gherkin. Nobody can read them and make sense of them, especially if you are doing something combinatorial.
2) Cucumber scenarios run slowly, and you want to run lots of them; you could dramatically reduce your run time by writing a fast unit test instead.
3) If you write this test in code, you can express it much more elegantly than you can in Gherkin.
4) Dealing with errors is painful (as you've already pointed out).
You are using the wrong tool for this particular job; use something else (a sketch of what that could look like follows).
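For instance, the users-by-endpoints matrix can drive a plain parameterised test. This is only a sketch with assumed names (the base URL, tokens, endpoints, and expected codes below are hypothetical, and real tests would use the right HTTP verb per endpoint), but it shows how compact the nested cycle becomes outside Gherkin:
import pytest
import requests
BASE_URL = "https://api.example.test"                      # hypothetical host
TOKENS = {"user1": "token-user1", "user2": "token-user2"}  # hypothetical auth tokens
# endpoint -> {user: expected HTTP status}, mirroring the test data matrix
EXPECTED = {
    "/users":       {"user1": 200, "user2": 404},
    "/roles":       {"user1": 200, "user2": 200},
    "/create_user": {"user1": 200, "user2": 200},
}
CASES = [(e, u, c) for e, per_user in EXPECTED.items() for u, c in per_user.items()]
@pytest.mark.parametrize("endpoint,user,expected", CASES)
def test_endpoint_access(endpoint, user, expected):
    # GET is used for simplicity; create/update endpoints would need POST/PUT.
    response = requests.get(BASE_URL + endpoint,
                            headers={"Authorization": f"Bearer {TOKENS[user]}"})
    assert response.status_code == expected, f"{user} -> {endpoint}"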
Still, I came up with this option, but it doesn't follow Gherkin conventions because the When and Then steps are jammed into one:
1. Preconditions moved to #Before hook
2. Scenario
Given <user> is authorized
Then <user> requests functionality appropriate response code should be received
| ENDPOINT | USER1 | USER2|
| /users | 200 | 404 |
| /create_user | 200 | 404 |
| /update_user | 200 | 404 |
Examples:
| username |
| USER1 |
| USER2 |
It's also inconvenient because when tests fail, it takes time to identify the faulty endpoint(s).

Should all fields that are visible on a screen be validated in Gherkin?

We are creating Gherkin feature files for our application to create executable specifications. Currently we have files that look like this:
Given product <type> is found
When the product is clicked
Then detailed information on the product appears
And the field text has a value
And the field price has a value
And the field buy is available
We are wondering whether this whole list of And keywords validating that fields are visible on the screen is the way to go, or whether we should shorten it to something like 'validate input'.
We have a similar case, in that our service can return many tens of elements that we could validate for each case. We do not validate every element for each interaction; we only test the elements that are relevant to the test case.
To make it easier to maintain and switch which elements we are using, we use scenario outlines and tables of examples.
Scenario Outline: PO Boxes correctly located
When we search in the USA for "<Input>"
Then the address contains
| Label | Text |
| PO Box | <PoBox> |
| City name | <CityName> |
| State code | <StateCode> |
| ZIP Code | <ZipCode> |
| +4 code | <ZipPlus4> |
Examples:
| ID | Input | PoBox | CityName | StateCode | ZipCode | ZipPlus4 |
| 01 | PO Box 123, 12345 | PO Box 123 | Boston | MA | 12345 | |
| 02 | PO Box 321, Whitefish | PO Box 321 | Whitefish | MN | 54321 | |
By doing it this way, we have a generic step, "the address contains", that uses the 'Label' and 'Text' columns to test the individual elements. It is a neat and tidy way to test a lot of potential combinations, but it probably depends on your individual use case and on how important all of the fields are.
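The step definition behind "the address contains" just walks the rows of that data table. The answer doesn't show its implementation, so this is only a sketch, written with Python's behave for illustration; context.page.displayed_value(label) is a hypothetical page-object helper that reads the on-screen value of a labelled field:
from behave import then
@then("the address contains")
def step_address_contains(context):
    # The Gherkin data table arrives as context.table; its first row
    # (| Label | Text |) provides the column headings.
    for row in context.table:
        label, expected = row["Label"], row["Text"]
        actual = context.page.displayed_value(label)  # hypothetical helper
        assert actual == expected, f"{label}: expected {expected!r}, got {actual!r}"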
You only need to validate the ones that provide business value, which is probably all of them. I would avoid using technical terms like "field" because they aren't related to a behavior. Al Mills is right about using the tables.
I'd word it like this:
Scenario Outline: Review product details
Given I find the product <Type>
When I select the product
Then detailed information on the product appears including
| Description | <Description> |
| Price | <Price> |
And I can buy the product
Examples:
| Type | Description | Price |
| Hose | Rubber Hose | 31.99 |
| Sprinkler | Rotating Sprinkler | 12.99 |
The words I chose are behaviors ("whats"), not technical implementations ("hows").

What are the Lexing Errors in Cucumber?

I was trying to run a simple feature file but I was getting an exception like:
Exception in thread "main" cucumber.runtime.CucumberException: Error parsing feature file.
which is Caused by: gherkin.lexer.LexingError: Lexing error
I am trying to parameterize a When statement and got this exception:
Scenario: Login to Gmail
Given User is on Gmail login page
When User enters <userName> and <pwd>
And Clicks on login button
Then User should redirect to home page
Scenario outline (I tried Examples as well but it didn't work):
|userName | pwd |
|ravivani10 | abc |
The correct syntax for a scenario outline is to start with the keyword Scenario Outline: and list the examples with the Examples: keyword.
Scenario Outline: Login to Gmail
Given User is on Gmail login page
When User enters <userName> and <pwd>
And Clicks on login button
Then User should redirect to home page
Examples:
| userName | pwd |
| ravivani10 | abc |
I had this same problem, but I was using the correct syntax. It turns out my formatting was wrong; yes, you read that correctly: formatting. My scenario looked like this:
Scenario Outline: Confirm that hitting the endpoint returns the expected data
Given uri url/to/a/service/to/test/param/{interval} and method GET
And system user
When I call the web service
Then I expect that 'http status is' '200'
And the following rules must apply to the response
| element | expectation | value |
| $ | is not null | |
| objectType | value = | Volume |
| objectData | is not null | |
| objectData | count = | 1 |
| objectData[0].value | is not null | |
| objectData[0].value | data type is | float |
| objectData[0].value | value = | <value> |
    Examples:
    | interval | value |
    | int1 | 355.77 |
    | int2 | 332.995 |
    | int3 | 353.71125 |
The above test will fail with a lexing error. Now take a look at the indentation of the Examples piece of my test (it is indented one level down from the Scenario Outline).
If I indent my test as follows (Examples at the same level as the Scenario Outline):
Scenario Outline: Confirm that hitting the endpoint returns the expected data
Given uri url/to/a/service/to/test/param/{interval} and method GET
And system user
When I call the web service
Then I expect that 'http status is' '200'
And the following rules must apply to the response
| element | expectation | value |
| $ | is not null | |
| objectType | value = | Volume |
| objectData | is not null | |
| objectData | count = | 1 |
| objectData[0].value | is not null | |
| objectData[0].value | data type is | float |
| objectData[0].value | value = | <value> |
Examples:
| interval | value |
| int1 | 355.77 |
| int2 | 332.995 |
| int3 | 353.71125 |
The above test will pass. It seems totally dumb to me, but that's how it works.
This can also be caused by not having the final | at the end of each line of table data. That's not the reason in the OP's case, but it might help someone else.
A lexing error from Cucumber just means that the feature file wasn't in the format that Cucumber was expecting. This could be caused by things like having a scenario title with no content, or having the title "Feature: blah" twice. It will happen even if the error isn't in the scenario that you are running.
The lexing error will usually give you a line number. Can you post the line it complains about please?
I got the same error, and it was caused by having a space between the word 'Outline' and the colon:
Scenario Outline : Convert currencies
When I removed the space, I had this:
Scenario Outline: Convert currencies
...and the issue was resolved.
To find the offender, check your error logs; the output will include the line number where the error lies. I hope this helps someone.
You need to do a couple of things: A) remove the space before the colon in the Feature : and Scenario Outline : keywords; and B) change Scenario Outline to Scenario (or add the missing Examples for the outline).
If you run this feature:
Feature: Proof of concept that my framework works
Scenario: My first Test
Given this is my first test
When This is my second step
Then This is the final step
Then Cucumber will output the to-be-completed step definitions:
You can implement step definitions for undefined steps with these snippets:
Given(/^this is my first test$/) do
pending # Write code here that turns the phrase above into concrete actions
end
When(/^This is my second step$/) do
pending # Write code here that turns the phrase above into concrete actions
end
Then(/^This is the final step$/) do
pending # Write code here that turns the phrase above into concrete actions
end
