I am working on automation test cases using BDD Cucumber. In my test cases I am using scenario outline as follows:
Scenario Outline: My test case
  Given My data is ready
  When My data is <r1>, <r2>, <r3>
  Then My data is <valid>

  Examples:
    | r1 | r2 | r3 | valid |
    | 1  | 2  | 3  | valid |
Now I want to add many more entries to my data in the "When" step, which will make it very long. Is it possible to use a data table together with this example table, so that my data is still passed in but the step does not become too lengthy? I searched the official docs but didn't find any reference to using data tables together with example tables.
Yes, you should be able to put the example tokens in the data table cells:
Scenario Outline: My test case
  Given My data is ready
  When My data is:
    | Field | Value |
    | R1    | <r1>  |
    | R2    | <r2>  |
    | R3    | <r3>  |
  Then My data is <valid>

  Examples:
    | r1 | r2 | r3 | valid |
    | 1  | 2  | 3  | valid |
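In the step definition the table arrives with the tokens already substituted from the current Examples row. A minimal sketch of such a step, assuming Cucumber-JVM (the step wording and the R1/R2/R3 keys mirror the example above; everything else is made up):

import io.cucumber.datatable.DataTable;
import io.cucumber.java.en.When;
import java.util.Map;

public class MyDataSteps {

    @When("My data is:")
    public void myDataIs(DataTable table) {
        // By the time this runs, <r1>, <r2> and <r3> have been replaced by the
        // values from the current Examples row, so the cells contain "1", "2", "3".
        Map<String, String> data = table.asMap(String.class, String.class);
        // Note: the header row ends up in the map too ("Field" -> "Value").
        String r1 = data.get("R1");   // "1"
        String r2 = data.get("R2");   // "2"
        String r3 = data.get("R3");   // "3"
        // ... drive the test with r1, r2, r3
    }
}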
AWS Athena query question:
I have a nested map in my rows, and I would like to transpose its keys into columns.
I could name the columns explicitly, like items['label_a'], but in this case the keys are actually dynamic...
From these rows:
{id=1, items={label_a=foo, label_b=foo}}
{id=2, items={label_a=bar, label_c=bar}}
{id=3, items={label_b=baz, label_c=baz}}
I would like to get a table like so:
| id | label_a | label_b | label_c |
------------------------------------
| 1 | foo | foo | |
| 2 | bar | | bar |
| 3 | | baz | baz |
Is that possible, and how can I do it in AWS Athena (Presto version 0.172)?
Thanks!
This is not possible in a dynamic manner, because the output columns need to be known to the planner before query execution starts.
See the previous discussion here: https://github.com/prestosql/presto/issues/2448 and https://github.com/prestosql/presto/issues/1206.
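If you can list the keys up front, a static version of the query works. A minimal sketch, assuming the data sits in a table called my_table with an id column and an items column of type map(varchar, varchar) (the table name is made up):

SELECT
  id,
  element_at(items, 'label_a') AS label_a,  -- element_at returns NULL when the key is absent
  element_at(items, 'label_b') AS label_b,
  element_at(items, 'label_c') AS label_c
FROM my_table;

For a truly dynamic set of keys you would have to discover the keys in a first query (e.g. with map_keys) and generate the SELECT list in your application code before running the second query.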
I am a total newbie when it comes to both OData and Logic Apps.
My scenario is as follows:
I have an Azure SQL database with two tables (daily_stats, weekly_stats) for users
I have a Logic App that I managed to test successfully, but it targets only one table. It is triggered by an HTTP request and initialises a variable using the following expression to get the query filter:
if(equals(coalesce(trigger()['outputs']?['queries']?['filter'],''),''),'1 eq 1',trigger()['outputs']?['queries']?['filter'])
The problem is how to query a different table based on what the user passes in the OData GET request.
I imagine I need a condition, and the pseudo-code would be something like this:
For daily stats, the OData query URL would be:
https://myproject.logic.azure.com/workflows/some-guid-here/triggers/manual/paths/invoke/daily_stats/api-version=2016-10-01/&sp=%2Ftriggers%2Fmanual%2Frun&sv=1.0&sig=my-key-here&filter=userid eq 'richard'
For weekly stats, the OData query URL would be:
https://myproject.logic.azure.com/workflows/some-guid-here/triggers/manual/paths/invoke/weekly_stats/api-version=2016-10-01/&sp=%2Ftriggers%2Fmanual%2Frun&sv=1.0&sig=my-sig-here&filter=userid eq 'richard'
If it is daily_stats, it queries the daily_stats stored procedure/table for the user = richard
If it is weekly_stats, it queries the weekly_stats stored procedure/table for the user = richard
Edit: Added an ASCII flow diagram
+----------------------+
|   HTTP OData GET     |
|   request            |
+----------+-----------+
           |
           v
+----------+-----------+
|                      |
|   filter has         |
|   daily_stats?       |
|                      |
+----------+-----------+
           |
    YES    |    NO
  +--------+---------+
  |                  |
  v                  v
+-------------+  +--------------+
|  query      |  |  query       |
|  daily      |  |  weekly      |
|  stats      |  |  stats       |
|  table      |  |  table       |
+-------------+  +--------------+
There is a Switch action; for more information you can refer to: Create switch statements that run workflow actions based on specific values in Azure Logic Apps.
Note that switch statements support only equality operators. If you need other relational operators, such as "greater than", use a conditional statement instead.
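As a rough sketch, the Switch action in the workflow definition could look something like this, assuming the request trigger exposes the table name as a relative-path parameter called table (the action name, case names and inner actions are placeholders, not a working definition):

"Switch_on_table": {
    "type": "Switch",
    "expression": "@triggerOutputs()['relativePathParameters']['table']",
    "cases": {
        "Daily": {
            "case": "daily_stats",
            "actions": { "Run_daily_stats_query": {} }
        },
        "Weekly": {
            "case": "weekly_stats",
            "actions": { "Run_weekly_stats_query": {} }
        }
    },
    "default": { "actions": {} },
    "runAfter": {}
}

Each case would then contain the SQL Server connector action (for example "Execute stored procedure") for the corresponding table, with the filter expression from the trigger passed through as before.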
I'm using MySQL. I want a column to have unique values only in some cases.
For example, the table can have the following values:
+----+-----------+----------+------------+
| id | user_id | col1 | col2 |
+----+-----------+----------+------------+
| 1 | 2 | no | no |
| 2 | 2 | no | no |
| 3 | 3 | no | yes |
| 4 | 2 | yes | no |
| 5 | 2 | no | yes |
+----+-----------+----------+------------+
I want the no|no combination to be able to repeat for the same user, but not the yes|no combination. Is this possible in MySQL? And with Knex?
My migration for that table looks like this:
return knex.schema.createTable('myTable', table => {
table.increments('id').unsigned().primary();
table.integer('user_id').unsigned().notNullable().references('id').inTable('table_user').onDelete('CASCADE').index();
table.string('col1').defaultTo('yes');
table.string('col2').defaultTo('no');
});
That doesn't seem to be an easy task. You would need a partial unique index over multiple columns.
I couldn't spot any indication that MySQL supports partial indexes: https://dev.mysql.com/doc/refman/8.0/en/create-index.html
So it could be done with something like what is described here, although using triggers for that seems a bit of an overkill: https://dba.stackexchange.com/questions/41030/creating-a-partial-unique-constraint-for-mysql
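One possible workaround (a sketch of a generated-column approach rather than the trigger approach, with made-up column and index names) is to add a column that only holds a value for the combination that must stay unique, and put a unique index on it. MySQL allows any number of NULLs in a unique index, so all other combinations can repeat freely:

ALTER TABLE myTable
  ADD COLUMN yes_no_user INT UNSIGNED
    GENERATED ALWAYS AS (
      CASE WHEN col1 = 'yes' AND col2 = 'no' THEN user_id END
    ) STORED,
  ADD UNIQUE INDEX uq_yes_no_user (yes_no_user);

In a Knex migration you could issue this with knex.schema.raw(...).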
I am trying to create an Excel file from MATLAB with data for multiple cases. The Excel file should look something like this:
Case #|____________________________Line 1_____________________________________________|_______ Line 2 _____________ ...
|______Node 1______|______Node 2______|______Node 3______|...|______OverAll_____|
| Min|Max|Mean|Std | Min|Max|Mean|Std | Min|Max|Mean|Std |...| Min|Max|Mean|Std |
|_______________________________________________________________________________|
1| | | | | | | | | | | | | | | | |
2| | | | | | | | | | | | | | | | |
I have the data for each Line > Node in a structured format which I can read through a for loop for a given case. How can I write the values into an Excel file? I don't know how to get the next available cell range where I need to place the value. Also, how can I generate such header text dynamically? The number of Nodes and properties (Min/Max/Mean/Std) might change in the future.
Thank you for your help. Any suitable tutorial that teaches slightly more advanced xlswrite usage would also help.
Use actxserver to get access to the full Excel functionality from MATLAB:
hApp = actxserver('Excel.Application')
From there you can use all the methods that the Excel Application object exposes, directly from MATLAB.
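A minimal sketch of writing a header plus a block of numbers to explicit ranges through the COM interface (the file name, sheet and data are made up, and the range arithmetic only handles columns A-Z):

% Start Excel and add a workbook
hApp = actxserver('Excel.Application');
wb = hApp.Workbooks.Add;
ws = wb.Sheets.Item(1);

% Header cell
ran = get(ws, 'Range', 'B1');
ran.Value = 'Line 1';

% Write a 2x4 block starting at B3; the end cell is computed from the
% size of the data so the range always matches the block being written.
data = rand(2, 4);
startRow = 3; startCol = 2;                        % i.e. cell B3
endRow = startRow + size(data, 1) - 1;
endCol = startCol + size(data, 2) - 1;
rangeStr = sprintf('%c%d:%c%d', 'A' + startCol - 1, startRow, ...
                                'A' + endCol - 1, endRow);
ran = get(ws, 'Range', rangeStr);
ran.Value = data;

% Save and clean up
wb.SaveAs(fullfile(pwd, 'cases.xlsx'));
wb.Close(false);
hApp.Quit;
delete(hApp);

For simpler cases, xlswrite(filename, data, sheet, 'B3') also accepts an explicit start cell, which may be enough if you can compute the start cell for each case yourself.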
Is there any way to reuse data in SpecFlow feature files?
E.g. I have two scenarios, which both uses the same data table:
Scenario: Some scenario 1
Given I have a data table
| Field Name | Value |
| Name | "Tom" |
| Age | 16 |
When ...
Scenario: Some scenario 2
Given I have a data table
| Field Name | Value |
| Name | "Tom" |
| Age | 16 |
And I have another data table
| Field Name | Value |
| Brand | "Volvo" |
| City | "London" |
When ...
In these simple examples the tables are small, so it's not a big problem; in my case, however, the tables have 20+ rows and will be used in at least 5 tests each.
I'd imagine something like this:
Having data table "Employee"
| Field Name | Value |
| Name | "Tom" |
| Age | 16 |
Scenario: Some scenario 1
Given I have a data table "Employee"
When ...
Scenario: Some scenario 2
Given I have a data table "Employee"
And I have another data table
| Field Name | Value |
| Brand | "Volvo" |
| City | "London" |
When ...
I couldn't find anything like this in the SpecFlow documentation. The only suggestion for sharing data was to put it into *.cs files. However, I can't do that, because the feature files will be used by non-technical people.
The Background is the place for common data like this until the data gets too large and your Background section ends up spanning several pages. It sounds like that might be the case for you.
You mention the tables having 20+ rows each and having several data tables like this. That would be a lot of Background for readers to wade through before they get to the Scenarios. Is there another way you could describe the data? When I had tables of data like this in the past, I put the details into a fixtures class in the automation code and then described just the important aspects in the Feature file.
Assuming for the sake of an example that "Tom" is a potential car buyer and you're running some sort of car showroom then his data table might include:
| Field | Value |
| Name | Tom |
| Age | 16 |
| Address | .... |
| Phone Number | .... |
| Fav Colour | Red |
| Country | UK |
Your Scenario 2 might be "Under 18s shouldn't be able to buy a car" (in the UK at least). Given that scenario, we don't care about Tom's address or phone number, only his age. We could write that scenario as:
Scenario: Under 18s shouldn't be able to buy a car
Given there is a customer "Tom" who is under 18
When he tries to buy a car
Then I should politely refuse
Instead of keeping that table of Tom's details in the Feature file, we just reference the significant parts. When the Given step runs, the automation can look up "Tom" from our fixtures. The step references his age so that a) it's clear to the reader of the Feature file who Tom is, and b) we make sure the fixture data is still valid.
A reader of that scenario will immediately understand what's important about Tom (he's 16), and they don't have to keep cross-referencing between the Scenario and the Background. Other Scenarios can also use Tom, and if they are interested in other aspects of his information (e.g. his address) then they can specify the relevant information: Given there is a customer "Tom" who lives at 10 Downing Street.
Which approach is best depends on how much of this data you've got. If it's a small number of fields across a couple of tables then put it in the Background, but once it gets to 10+ fields or a large number of tables (presumably there are many potential customers) then I'd suggest moving it outside the Feature file and just describing the relevant information in each Scenario.
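For what it's worth, the fixtures idea could look roughly like this in the SpecFlow bindings (class, property and fixture names are made up; only the shape matters):

using System;
using System.Collections.Generic;
using TechTalk.SpecFlow;

public class Customer
{
    public string Name { get; set; }
    public int Age { get; set; }
    public string Country { get; set; }
}

[Binding]
public class CustomerSteps
{
    // Fixture data lives in code instead of the feature file.
    private static readonly Dictionary<string, Customer> Fixtures =
        new Dictionary<string, Customer>
        {
            ["Tom"] = new Customer { Name = "Tom", Age = 16, Country = "UK" }
        };

    private Customer _customer;

    [Given(@"there is a customer ""(.*)"" who is under (\d+)")]
    public void GivenThereIsACustomerWhoIsUnder(string name, int ageLimit)
    {
        _customer = Fixtures[name];
        // Fail fast if the fixture no longer matches what the scenario claims.
        if (_customer.Age >= ageLimit)
            throw new InvalidOperationException(
                $"Fixture '{name}' is not under {ageLimit}");
    }
}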
Yes, you can use a Background; see https://github.com/cucumber/cucumber/wiki/Background:
Background:
Given I have a data table "Employee"
| Field Name | Value |
| Name | "Tom" |
| Age | 16 |
Scenario: Some scenario 1
When ...
Scenario: Some scenario 2
Given I have another data table
| Field Name | Value |
| Brand | "Volvo" |
| City | "London" |
If you're ever unsure, I find http://www.specflow.org/documentation/Using-Gherkin-Language-in-SpecFlow/ a great resource.