How to pass a List of strings from a Cucumber scenario

I need to pass a list of strings from a Cucumber scenario, which works fine as below:
Scenario Outline: Verify some scenario
Given something
When user do something
Then user should have some "<data>"
Examples: Some example
|data|
|Test1, Test2, Test3, Test4|
In the step definition I use a List<String> to retrieve the values of the data variable.
But when one of the values of data contains a comma, e.g. Tes,t4, it becomes complex, since Cucumber treats "Tes" and "t4" as two different values:
Examples: Some example
|data|
|Test1, Test2, Test3, Tes,t4|
So is there any escape character that I can use, or is there any other way to handle this situation?


This should work for you:
Scenario: Verify some scenario
Given something
When user do something
Then user should have following
| Test1 |
| Test2 |
| Test3 |
| Tes,t4|
In the step definitions:
Then("^user should have following$")
public void user_should_have_following(List<String> testData) throws Throwable {
    // TODO: use your test data as desired
}

In the Transformer of your TypeRegistryConfigurer, you can do this:
@Override
public Object transform(String s, Type type) {
    if (StringUtils.isNotEmpty(s) && s.startsWith("[")) {
        s = s.substring(1, s.length() - 1);     // strip the surrounding brackets
        return Arrays.asList(s.split(","));     // split the bracketed text into a List
    }
    return objectMapper.convertValue(s, objectMapper.constructType(type));
}
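For context, here is a minimal sketch of the class that transform method would sit in. The class name and package are mine, and exact package names differ between Cucumber versions, so treat this as an outline rather than drop-in code:
import java.lang.reflect.Type;
import java.util.Arrays;
import java.util.Locale;

import org.apache.commons.lang3.StringUtils;
import com.fasterxml.jackson.databind.ObjectMapper;

import io.cucumber.core.api.TypeRegistry;
import io.cucumber.core.api.TypeRegistryConfigurer;

public class ParameterTypes implements TypeRegistryConfigurer {

    private final ObjectMapper objectMapper = new ObjectMapper();

    @Override
    public Locale locale() {
        return Locale.ENGLISH;
    }

    @Override
    public void configureTypeRegistry(TypeRegistry registry) {
        // Bracketed cells like [Test1,Test2] become Lists; everything else goes through Jackson
        registry.setDefaultParameterTransformer((String s, Type type) -> {
            if (StringUtils.isNotEmpty(s) && s.startsWith("[")) {
                return Arrays.asList(s.substring(1, s.length() - 1).split(","));
            }
            return objectMapper.convertValue(s, objectMapper.constructType(type));
        });
    }
}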

Try setting the Examples in a column, like this:
| data |
| Test1 |
| Test2 |
| Test3 |
| Tes,t4 |
This will run the scenario 4 times, with 'data' taking the next value on each run: first 'Test1', then 'Test2', etc.
In the step definition you can use that data like so:
Then(/^user should have some "([^"]*)"$/) do |data|
puts data
end
If you want to use |Test1, Test2, Test3, Tes,t4|, change the ',' separator to ';', e.g. |Test1; Test2; Test3; Tes,t4|, and split the data in the step definition:
data.split("; ") which results in ["Test1", "Test2", "Test3", "Tes,t4"]
Converting the data to a List (in Java):
String test = "Test1; Test2; Test3; Tes,t4";
String[] myArray = test.split("; ");
List<String> myList = new ArrayList<>();
for (String str : myArray) {
    myList.add(str);
}
System.out.print(myList);
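Incidentally, the loop can be collapsed into a single line, since Arrays.asList already wraps the array:
List<String> myList = new ArrayList<>(Arrays.asList(test.split("; ")));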

Examples:
| Colors     | color-count |
| Red, Green | 5           |
| Yellow     | 8           |
from behave import given  # assuming behave, given the "{colors}" placeholder style

@given('the user picks colors "{colors}"')  # the step text is illustrative
def step_impl(context, colors):
    context.object.colors = colors.split(",")
    for color in context.object.colors:
        print(color)

Don't put the data in your scenario. You gain very little from it, and it creates loads of problems. Instead, give your data a name and use the name in the Then of your scenario,
e.g.
Then the user should see something
(a sketch of this approach follows the list below). Putting data and examples in scenarios is mostly pointless. The following apply:
The data will be a duplication of what should be produced.
The data is prone to typos.
When the scenario fails, it will be difficult to know whether the code is wrong (it's producing the wrong data) or the scenario is wrong (you've typed in the wrong data).
It's really hard to express complex data accurately.
Nobody is really going to read your scenario carefully enough to ensure the data is accurate.
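For instance, a hypothetical Java step definition where the expected values live in one place in the code instead of in the feature file (the step text, names, and application object are all illustrative):
@Then("the user should see the standard test names")
public void userShouldSeeStandardTestNames() {
    // The literal values live here, not in the scenario text
    List<String> expected = Arrays.asList("Test1", "Test2", "Test3", "Tes,t4");
    // 'application' stands in for whatever your steps use to reach the code under test
    assertEquals(expected, application.displayedNames());
}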

Related

PowerShell on CSV file - looking for string depending on string

I need your help regarding PowerShell programming on a CSV file.
I've done some searching but cannot find what I'm looking for (or perhaps I don't know the technical terms). Basically, I have an Excel workbook with a large amount of data (more or less 38 columns x 350,000 rows), and there are a couple of formulas that take hours to calculate.
I was first wondering if PowerShell could speed up the calculation a bit compared to Excel. The calculations taking most of my time are in fact not that complex (at least at first glance). My data is more or less constructed like this:
Ref Title
----- --------------------------
A/001 "free_text"
A/002 "free_text A/001 free_text"
... ...
A/005 "free_text A/004 free_text"
A/006 "free_text"
B/001 "free_text"
B/002 "free_text"
C/001 "free_text"
C/002 "free_text"
...
C/050 "free_text C/047 free_text"
... ...
C/103 "free_text"
D/001 "free_text"
D/002 "free_text D/001 free_text"
... ....
Basically the data is as follows:
the Ref field contains unique values, in {letter}/{incremental value} format.
In some rows, the Title field may call up one of the Ref data. For example, in line 2, the Title calls for the A/001 Ref. In the last row, the Title calls for the D/001 Ref, etc.
There is no logic pattern defining when this ref could be called up in a title. This is random.
However, what I'm 100% sure of is the following:
The Ref called in the Title always belongs to the same {letter} block. For example, the string 'C/047' in the Title field can only be found in the block where the Ref {letter} is C.
The row whose Title calls a Ref is always located 'after' (i.e. in a lower row than) the Ref it refers to. In other words, I cannot have a line with the following pattern:
Ref Title
------------ -----------------------------------------
{letter/i} {free_text {letter/j} free_text} with j>i
→ This is not possible.
→ j is always < i
I've used these characteristics in Excel to minimize my lookup arrays, but it still takes an hour to calculate everything.
I've therefore looked into PowerShell and started to 'play' a bit with the CSV, looping with ForEach-Object in the hope of getting quicker results. Up to now I've basically ended up looping over my CSV file twice, one loop nested in the other:
$CSV1 = Import-Csv myfile.csv
$CSV2 = Import-Csv myfile.csv
$CSV1 | ForEach-Object {
    # find Titles mentioning this Ref
    $TitSearch = $_.Ref
    $CSV2 | ForEach-Object {
        if ($_.Title -match $TitSearch) {
            # myinstructions
        }
    }
}
It works, but it's really, really, really long. So I then tried the following instead of the $CSV2 | ForEach-Object:
$CSV | Where-Object { $_.Title -match $TitSearch } | ForEach-Object { $_.Ref }
In either case it's too long and not efficient at all. Additionally, with these two solutions I'm not using the above characteristics, which could reduce the lookup array, and as already stated I seem to end up looping over the CSV file from beginning to end twice.
Questions:
Is there a leaner way to do this?
Am I wasting my time with PowerShell?
I thought about creating one file per Ref {letter} block (one file for block A, one for B, etc.). However, I have about 50,000 blocks to create. Or create them one by one, carry out the analysis, put the results in a new file, and delete them. Would that be quicker?
Note: this is for work, to be used by other colleagues, and Excel and PowerShell are really the only software we may use. I know VBA, but OK... In the end I'm curious about how and whether this can be solved in a simple manner using PowerShell.
As far as I can see, your base algorithm does N^2 iterations (~120 billion). There is a standard way to make this efficient: build a hashtable first. A hashtable is a key/value store whose lookup is pretty much instantaneous, so the algorithm's time complexity becomes ~N.
PowerShell has a built-in data type for that. In your case the key would be the ref, and the value an array of cell data (assuming your table is something like: ref, title, col1, ..., colN):
$hash = @{}
foreach ($row in $table) { $hash.Add($row.ref, @($row.title, $row.col1, ...)) }
# it will take 350K steps to generate it
# then you can iterate over it again
foreach ($key in $hash.Keys) {
    $key                               # access current ref
    $rowData = $hash.$key              # access current row elements (by index)
    $refRowData = $hash[$rowData[$j]]  # lookup from other rows, assuming the lookup reference is in some column
}
So that's the general idea of how to solve the time issue. To be honest, I don't believe you need to reinvent the wheel and code it yourself. What you need is a relational database. Since you have Excel, you should have MS Access too. Just import your data there, make ref and title an index, and then all you need is a self join. MS Access sucks, but I'm sure it will handle 350K rows just fine.
Ideally you'd want the data in a database on some corporate MSSQL server (open a ticket, talk to your manager, etc.). It will calculate all that in seconds, and then you can link the output to a spreadsheet as well.
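To make the hashtable idea concrete, here is a minimal end-to-end sketch. The Ref/Title column names come from the question; the '[A-Z]/\d{3}' pattern and the final output line are my assumptions:
$rows = Import-Csv 'myfile.csv'

# Pass 1 (~350K steps): index every row by its Ref
$hash = @{}
foreach ($row in $rows) { $hash[$row.Ref] = $row }

# Pass 2 (~350K steps): pull an 'X/123'-style token out of each Title and look it up
foreach ($row in $rows) {
    if ($row.Title -match '[A-Z]/\d{3}') {
        $referenced = $hash[$Matches[0]]
        if ($null -ne $referenced) {
            # your instructions here, e.g.:
            "{0} refers back to {1}" -f $row.Ref, $referenced.Ref
        }
    }
}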

Spock label combinations

There are so many Spock spec examples of how to use its labels, such as:
// when -> then label combo
def "test something"() {
when:
// blah
then:
// blah blah
}
Such label combinations as:
when -> then
given -> when -> then
expect
given -> expect
But nowhere can I find documentation on what the legal/meaningful combinations of these labels are. For instance, could I have:
def "do something"() {
when:
// blah
expect:
// blah
}
Could I? I do not know. What about:
def "do something else"() {
when:
// blah
then:
// blah
expect:
// blah
where:
// blah
}
Could I? Again, I do not know. But I wonder.
I am actually an author of one of the tutorials you have mentioned. I think the best way to answer your question is to really understand what the particular labels are for. Maybe then it will be more obvious why certain combinations make sense and others do not. Please refer to the original documentation: http://spockframework.github.io/spock/docs/1.0/spock_primer.html. It explains all the labels and their purpose really well, and it comes with a diagram of the allowed block orderings, which I hope already tells you a lot. So, for instance, having both the when and expect labels does not make much sense. The expect label was designed to substitute for both when and then. The reasoning behind it is well explained in the documentation:
An expect block [...] is useful in situations where it is more natural to describe stimulus and expected response in a single expression.
Spock itself will not allow you to use this combination. If you try to execute the following code:
def "test"() {
when:
println("test")
expect:
1==1
}
You will receive an error message like this.
startup failed:
/Users/wooki/IdeaProjects/mylibrary/src/test/groovy/ExampleTest.groovy:
13: 'expect' is not allowed here; instead, use one of: [and, then] @ line 13, column 9.
To my surprise, the second of your examples works. However, I believe there is not much to gain from using expect here instead of and, as I would normally do:
def "do something else"() {
when:
// stimulus
then:
// 1st verification
and:
// 2nd verification
where:
// parametrization
}
The only reason to have expect or and after then is to separate the code responsible for verifying the result of executing the code inside the when label: for instance, to group verifications that are somehow related to each other, or even to separate single verifications while having the possibility of giving them a description using the strings that can be attached to labels in Spock.
def "do something else"() {
when:
// stimulus
then: "1 is always 1"
1 == 1
and: "2 is always 2"
2 == 2
}
Hence, since you are already stating that the verification part of the test has begun (using the then block), I would stick to the and label for separating further code.
Regarding the where label: it works sort of as a loop indicator. Whatever combination of labels you use before it, you can always put the where label at the end. It is not really part of a single test run; it just makes the test run a number of times.
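For completeness, a minimal parameterized feature method: the where block below makes the expect block run once per row of the table:
def "maximum of two numbers"() {
    expect:
    Math.max(a, b) == c

    where:
    a | b || c
    1 | 3 || 3
    7 | 4 || 7
    0 | 0 || 0
}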
It is also possible to have something like this:
def "test"() {
given:
def value
when:
value = 1
then:
value == 1
when:
value = 2
then:
value == 2
}
Using the above, just make sure that your test is not "testing too much", i.e. perhaps the reason you need two when-then pairs is a perfect excuse to split this test into two separate tests.
I hope this answered at least some of your questions.

How to take multiple values of a column in single column field in HP loadrunner

I have a script in HP LoadRunner, and I want to take multiple values of a column in a single field.
I have this:
Variable1
test1
test2
test3
test4
I am trying to do this:
Variable1
test1,test2,test3,test4
I tried writing some C code to solve this but unfortunately could not find a proper solution.
Is it possible, by writing code, to change this into a single-column field during the first test run and then take the values from that single-column field?
Kindly help me out, either with C code in the script or with something to change in the Excel/.dat file.
I think I faced the same problem, so try this:
long fp;
int i, j;

Action()
{
    // Write five quoted, comma-separated rows of parameter values to an external file.
    // Note: the internal file parameter should be set to "update value on: each occurrence",
    // otherwise every {Internal file parameter} below evaluates to the same value.
    fp = fopen("External file path", "w");
    for (i = 1; i <= 5; i++)
    {
        fputs("\"", fp);
        fputs(lr_eval_string("{Internal file parameter}"), fp);
        for (j = 1; j <= 10; j++)
        {
            fputs(",", fp);
            fputs(lr_eval_string("{Internal file parameter}"), fp);
        }
        fputs("\"", fp);
        fputs("\n", fp);
    }
    fclose(fp);
    return 0;
}
This solution will help you if your "Variable1" appears only once in your script.
In your script, when replacing the parameter, instead of using "{Variable1}"
use "{Variable1},{Variable1},{Variable1},{Variable1}", and in your parameter settings select "update value on: each occurrence".

Split a string containing fixed length columns

I got data like this:
3LLO24MACT01 24MOB_6012010051700000020100510105010 123456
It contains different values for different columns when I import it.
Every column is fixed width:
Col#1 is the ID and just 1 long. Meaning it is "3" here.
Col#2 is 3 in length and here "LLO".
Col#3 is 9 in length and "24MACT01 " (notice that the missing characters get filled up by blanks).
This goes on for 15 columns or so...
Is there a method to quickly cut it into different elements based on sequence length? I couldn't find any.
This can be done with RegEx matching, and creating an array of custom objects. Something like this:
$AllRecords = Get-Content C:\Path\To\File.txt | Where-Object { $_ -match "^(.)(.{3})(.{9})" } | ForEach-Object {
    [PSCustomObject]@{
        'Col1' = $Matches[1]
        'Col2' = $Matches[2]
        'Col3' = $Matches[3]
    }
}
That will take each line, match by how many characters are specified, and then create an object based off those matches. It collects all objects in an array and could be exported to CSV or whatever. The 'Col1', 'Col2' etc are just generic column headers I suggested due to a lack of better information, and could be anything you wanted.
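Alternatively, if you would rather avoid regex entirely, plain .Substring() calls work too. A sketch using the three column widths described in the question:
Get-Content C:\Path\To\File.txt | ForEach-Object {
    [PSCustomObject]@{
        'Col1' = $_.Substring(0, 1)   # width 1
        'Col2' = $_.Substring(1, 3)   # width 3
        'Col3' = $_.Substring(4, 9)   # width 9, trailing blanks preserved
    }
}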
[Regex]::Matches will do this rather easily. All you need to do is specify a Regex pattern that has . followed by the number of characters you want in curly braces. For example, to match a column of three characters, you would write .{3}. You then do this for all 15 columns.
To demonstrate, I will use a string that contains the first three columns of your example data (since I know their sizes):
PS > $data = '3LLO24MACT01 '
PS > $pattern = '(.{1})(.{3})(.{9})'
PS > ([Regex]::Matches($data, $pattern).Groups).Value
3LLO24MACT01
3
LLO
24MACT01
PS >
Note that the first value outputted will be the text matched by all of the capture groups. If you do not need this, you can remove it with slicing:
$columns = ([Regex]::Matches($data, $pattern).Groups).Value
$columns = $columns[1..$columns.Length]
New-PSObjectFromMatches is a helper function for creating PS Objects from regex matches.
The -Debug option can help with the process of writing the regex.

Pycassa: how to query parts of a Composite Type

Basically I'm asking the same thing as in this question but for the Python Cassandra library, PyCassa.
Lets say you have a composite type storing data like this:
[20120228:finalscore] = '31-17'
[20120228:halftimescore]= '17-17'
[20120221:finalscore] = '3-14'
[20120221:halftimescore]= '3-0'
[20120216:finalscore] = '54-0'
[20120216:halftimescore]= '42-0'
So, I know I can easily slice based off of the first part of the composite type by doing:
>>> cf.get('1234', column_start=('20120216',), column_finish=('20120221',))
OrderedDict([((u'20120216', u'finalscore'), u'54-0'),
((u'20120216', u'halftimescore'), u'42-0')])
But if I only want the finalscore, I would assume I could do:
>>> cf.get('1234', column_start=('20120216', 'finalscore'),
...        column_finish=('20120221', 'finalscore'))
To get:
OrderedDict([((u'20120216', u'finalscore'), u'54-0')])
But instead, I get:
OrderedDict([((u'20120216', u'finalscore'), u'54-0'),
((u'20120216', u'halftimescore'), u'42-0')])
Same as the 1st call.
Am I doing something wrong? Should this work? Or is there some syntax like cf.get(..., columns=[('20120216', 'finalscore')])? I tried that too and got an exception.
According to http://www.datastax.com/dev/blog/introduction-to-composite-columns-part-1, I should be able to do something like this...
Thanks
If you know all the components of the composite column, then you should use the 'columns' option:
cf.get('1234', columns=[('20120216', 'finalscore')])
You said you got an error trying to do this, but I would suggest trying again. It works fine for me.
When you are slicing composite columns, you need to think about how they are sorted. Composite columns sort starting with the left-most component, then by each component toward the right. So in your example the columns would look like this:
+------------+---------------+------------+---------------+------------+----------------+
| 20120216 | 20120216 | 20120221 | 20120221 | 20120228 | 20120228 |
| finalscore | halftimescore | finalscore | halftimescore | finalscore | halftimescore |
+------------+---------------+------------+---------------+------------+----------------+
Thus, when you slice from ('20120216', 'finalscore') to ('20120221', 'finalscore'), you get both values for '20120216'. To make your query work as you want it to, you could change the column_finish to ('20120216', 'halftimescore').
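In other words, something like this (a sketch of the adjusted slice):
# Finish the slice at the last component of the same date,
# so only that date's columns come back
cf.get('1234',
       column_start=('20120216', 'finalscore'),
       column_finish=('20120216', 'halftimescore'))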
