White spaces in data table not working properly in cucumber

The input is not parsed correctly when a value in the Cucumber data table contains spaces. I also deleted trailing and leading spaces to make sure there are no hidden illegal characters, but the Java code still receives the inputs incorrectly. Any advice would certainly help.
Feature: Foo
#foo
Scenario Outline: sample run
Then send request <aoo> <boo> <coo> <doo>
Examples:
| aoo | boo | coo            | doo  |
| 200 | xyx | Do not disturb | true |
@Then("^send request (.*) (.*) (.*) (.*)$")
public void send_request(String aoo, String boo, String coo, String doo) throws Throwable {
    System.out.println("aoo " + aoo);
    System.out.println("boo " + boo);
    System.out.println("coo " + coo);
    System.out.println("doo " + doo);
}
Expected Output:
aoo 200
boo xyx
coo Do not disturb
doo true
Actual Output:
aoo 200 xyx Do
boo not
coo disturb
doo true

(.*) is greedy and matches any character, including spaces, so the first group swallows words meant for the later ones. You have to limit the capture groups (e.g. (\S+) for single-word values). If possible, use more specific data types instead: aoo -> int, doo -> boolean.
#foo
Scenario Outline: sample run
Then send request "<aoo>" "<boo>" "<coo>" "<doo>"
Examples:
| aoo | boo | coo            | doo  |
| 200 | xyx | Do not disturb | true |
@Then("^send request \"(.*)\" \"(.*)\" \"(.*)\" \"(.*)\"$")
public void send_request(String aoo, String boo, String coo, String doo) throws Throwable {
    // omitted code
}
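The effect of the greedy groups can be reproduced outside Cucumber with plain java.util.regex; a minimal sketch against the example step text (the \S+ variant is just one way to limit the single-word parameters):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class GreedyDemo {
    public static void main(String[] args) {
        String step = "send request 200 xyx Do not disturb true";

        // Four greedy (.*) groups: each grabs as much text as it can,
        // so the words of "Do not disturb" get split across groups.
        Matcher greedy = Pattern.compile("^send request (.*) (.*) (.*) (.*)$").matcher(step);
        if (greedy.matches()) {
            System.out.println(greedy.group(1)); // 200 xyx Do
            System.out.println(greedy.group(2)); // not
        }

        // Restricting the single-word parameters to \S+ (no whitespace)
        // leaves only the free-text parameter as a greedy (.*).
        Matcher limited = Pattern.compile("^send request (\\S+) (\\S+) (.*) (\\S+)$").matcher(step);
        if (limited.matches()) {
            System.out.println(limited.group(3)); // Do not disturb
        }
    }
}
```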


Comparing elements from 2 lists in Kotlin

I have 2 lists of different types, and I want to compare them and update the 'Check' value in list 2 when its 'Brand' is found in list 1.
--------------------     --------------------
| Name   | Brand    |    | Brand    | Check |
--------------------     --------------------
| vga x  | Asus     |    | MSI      | X     |
| vga b  | Asus     |    | ASUS     | -     |
| mobo x | MSI      |    | KINGSTON | -     |
| memory | Kingston |    | SAMSUNG  | -     |
--------------------     --------------------
So usually I just do:
for (x in list1) {
    for (y in list2) {
        if (y.brand == x.brand) {
            y.check = true
        }
    }
}
Is there any simpler solution for that?
Since you're mutating the objects, it doesn't really get any cleaner than what you have. It can be done using any like this, but in my opinion it is not any clearer to read:
list2.forEach { bar ->
    bar.check = bar.check || list1.any { it.brand == bar.brand }
}
The above is slightly more efficient than what you have since it inverts the iteration of the two lists so you don't have to check every element of list1 unless it's necessary. The same could be done with yours like this:
for (x in list2) {
    for (y in list1) {
        if (y.brand == x.brand) {
            x.check = true
            break
        }
    }
}
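If list1 is large, the repeated inner scan can also be replaced by a precomputed lookup set, so each element of list2 is checked in O(1) instead of rescanning list1, turning O(n*m) into O(n+m). A sketch of that idea (shown in Java; the type and field names are made up to mirror the question):

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

class SetLookupDemo {
    // Minimal stand-ins for the question's two item types (names are illustrative).
    record Product(String name, String brand) {}

    static class Check {
        final String brand;
        boolean check;
        Check(String brand) { this.brand = brand; }
    }

    // Build the brand set once, then each membership test is O(1).
    static void markChecks(List<Product> list1, List<Check> list2) {
        Set<String> brands = new HashSet<>();
        for (Product p : list1) brands.add(p.brand());
        for (Check c : list2) {
            c.check = c.check || brands.contains(c.brand); // case-sensitive, like the original
        }
    }

    public static void main(String[] args) {
        List<Product> list1 = List.of(new Product("mobo x", "MSI"), new Product("vga x", "Asus"));
        List<Check> list2 = List.of(new Check("MSI"), new Check("ASUS"));
        markChecks(list1, list2);
        System.out.println(list2.get(0).check); // true  (MSI matches)
        System.out.println(list2.get(1).check); // false ("ASUS" != "Asus": matching is case-sensitive)
    }
}
```

Note that, exactly like the loops above, this matches brands case-sensitively, so "ASUS" in list 2 does not match "Asus" in list 1.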
data class Item(val name: String, val brand: String)

fun main() {
    val list1 = listOf(
        Item("vga_x", "Asus"),
        Item("vga_b", "Asus"),
        Item("mobo_x", "MSI"),
        Item("memory", "Kingston")
    )
    val list2 = listOf(
        Item("", "MSI"),
        Item("", "ASUS"),
        Item("", "KINGSTON"),
        Item("", "SAMSUNG")
    )
    // Get intersections
    val intersections = list1.map { it.brand }.intersect(list2.map { it.brand })
    println(intersections)
    // Returns => [MSI]
    // Has any intersections
    val intersected = list1.map { it.brand }.any { it in list2.map { it.brand } }
    println(intersected)
    // Returns ==> true
}
UPDATE: I just saw that this isn't a solution for your problem, but I'll leave it here.

processing network packets in spark in a stateful manner

I would like to use Spark to parse network messages and group them into logical entities in a stateful manner.
Problem Description
Let's assume each message is in one row of an input dataframe, depicted below.
| row | time | raw payload   |
+-----+------+---------------+
| 1   | 10   | TEXT1;        |
| 2   | 20   | TEXT2;TEXT3;  |
| 3   | 30   | LONG-         |
| 4   | 40   | TEXT1;        |
| 5   | 50   | TEXT4;TEXT5;L |
| 6   | 60   | ONG           |
| 7   | 70   | -TEX          |
| 8   | 80   | T2;           |
The task is to parse the logical messages in the raw payload, and provide them in a new output dataframe. In the example each logical message in the payload ends with a semicolon (delimiter).
The desired output dataframe could then look as follows:
| row | time | message     |
+-----+------+-------------+
| 1   | 10   | TEXT1;      |
| 2   | 20   | TEXT2;      |
| 3   | 20   | TEXT3;      |
| 4   | 30   | LONG-TEXT1; |
| 5   | 50   | TEXT4;      |
| 6   | 50   | TEXT5;      |
| 7   | 50   | LONG-TEXT2; |
Note that some message rows do not yield a new row in the result (e.g. rows 4, 6, 7, 8), and some even yield multiple rows (e.g. rows 2, 5).
My questions:
Is this a use case for a UDAF? If so, how, for example, should I implement the merge function? I have no idea what its purpose is.
Since the message ordering matters (I cannot process LONGTEXT-1, LONGTEXT-2 properly without respecting the message order), can I tell Spark to parallelize perhaps at a higher level (e.g. per calendar day of messages) but not parallelize within a day (e.g. events at times 50, 60, 70, 80 need to be processed in order)?
Follow-up question: is it conceivable that the solution will be usable not just in traditional Spark, but also in Spark Structured Streaming? Or does the latter require its own kind of stateful processing method?
Generally, you can run arbitrary stateful aggregations on Spark Streaming by using mapGroupsWithState or flatMapGroupsWithState. You can find some examples here. None of those, though, will guarantee that the processing of the stream will be ordered by event time.
If you need to enforce data ordering, you should try to use window operations on event time. In that case, you need to run stateless operations instead, but if the number of elements in each window group is small enough, you can use collect_list, for instance, and then apply a UDF (where you can manage the state for each window group) on each list.
OK, in the meantime I figured out how to do this with a UDAF.
class TagParser extends UserDefinedAggregateFunction {

  override def inputSchema: StructType = StructType(StructField("value", StringType) :: Nil)

  override def bufferSchema: StructType = StructType(
    StructField("parsed", ArrayType(StringType)) ::
    StructField("rest", StringType)
    :: Nil)

  override def dataType: DataType = ArrayType(StringType)

  override def deterministic: Boolean = true

  override def initialize(buffer: MutableAggregationBuffer): Unit = {
    buffer(0) = IndexedSeq[String]()
    buffer(1) = null
  }

  def doParse(str: String, buffer: MutableAggregationBuffer): Unit = {
    buffer(0) = IndexedSeq[String]()
    val prevRest = buffer(1)
    var idx = -1
    val strToParse = if (prevRest != null) prevRest + str else str
    do {
      val oldIdx = idx
      idx = strToParse.indexOf(';', oldIdx + 1)
      if (idx == -1) {
        buffer(1) = strToParse.substring(oldIdx + 1)
      } else {
        val newlyParsed = strToParse.substring(oldIdx + 1, idx)
        buffer(0) = buffer(0).asInstanceOf[IndexedSeq[String]] :+ newlyParsed
        buffer(1) = null
      }
    } while (idx != -1)
  }

  override def update(buffer: MutableAggregationBuffer, input: Row): Unit = {
    if (buffer == null) {
      return
    }
    doParse(input.getAs[String](0), buffer)
  }

  override def merge(buffer1: MutableAggregationBuffer, buffer2: Row): Unit = throw new UnsupportedOperationException

  override def evaluate(buffer: Row): Any = buffer(0)
}
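The core of doParse, carrying an unfinished fragment over to the next row, can be exercised in isolation without Spark. A plain-Java sketch of the same buffering idea (the class name is made up; unlike the UDAF above, this version keeps the trailing ';' on each message):

```java
import java.util.ArrayList;
import java.util.List;

class DelimiterBuffer {
    private String rest = ""; // unfinished fragment carried over from earlier payloads

    // Feed one raw payload; return the logical messages it completes.
    List<String> feed(String payload) {
        List<String> complete = new ArrayList<>();
        String toParse = rest + payload;
        int start = 0, idx;
        while ((idx = toParse.indexOf(';', start)) != -1) {
            complete.add(toParse.substring(start, idx) + ";");
            start = idx + 1;
        }
        rest = toParse.substring(start); // may be empty
        return complete;
    }

    public static void main(String[] args) {
        DelimiterBuffer buf = new DelimiterBuffer();
        System.out.println(buf.feed("TEXT2;TEXT3;")); // [TEXT2;, TEXT3;]
        System.out.println(buf.feed("LONG-"));        // []
        System.out.println(buf.feed("TEXT1;"));       // [LONG-TEXT1;]
    }
}
```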
Here is a demo app that uses the above UDAF to solve the problem from above:
case class Packet(time: Int, payload: String)

object TagParserApp extends App {

  val spark, sc = ... // kept out for brevity

  val df = sc.parallelize(List(
    Packet(10, "TEXT1;"),
    Packet(20, "TEXT2;TEXT3;"),
    Packet(30, "LONG-"),
    Packet(40, "TEXT1;"),
    Packet(50, "TEXT4;TEXT5;L"),
    Packet(60, "ONG"),
    Packet(70, "-TEX"),
    Packet(80, "T2;")
  )).toDF()

  val tp = new TagParser
  val window = Window.rowsBetween(Window.unboundedPreceding, Window.currentRow)
  val df2 = df.withColumn("msg", tp.apply(df.col("payload")).over(window))
  df2.show()
}
This yields:
+----+-------------+--------------+
|time| payload| msg|
+----+-------------+--------------+
| 10| TEXT1;| [TEXT1]|
| 20| TEXT2;TEXT3;|[TEXT2, TEXT3]|
| 30| LONG-| []|
| 40| TEXT1;| [LONG-TEXT1]|
| 50|TEXT4;TEXT5;L|[TEXT4, TEXT5]|
| 60| ONG| []|
| 70| -TEX| []|
| 80| T2;| [LONG-TEXT2]|
+----+-------------+--------------+
The main issue for me was to figure out how to actually apply this UDAF, namely using this:
df.withColumn("msg", tp.apply(df.col("payload")).over(window))
The only thing I still need to figure out are the aspects of parallelization (which I only want to happen where we do not rely on ordering), but that's a separate issue for me.

cucumber table with valid and invalid input

I have this kind of test, which works:
Feature: TestAddition
Scenario Outline: "Addition"
Given A is <A> and B is <B>
Then A + B is <result>
Examples:
| A | B | result |
| 3 | 4 | 7 |
| 2 | 5 | 7 |
| 1 | 4 | 5 |
And that's the glue code:
package featuresAdditions;
import org.junit.Assert;
import cucumber.api.java.en.Given;
import cucumber.api.java.en.Then;
import math.AdditionEngine;
public class step {

    private AdditionEngine testAdditionEngine;
    private double resultAddition;

    @Given("^A is (\\d+) and B is (\\d+)$")
    public void addition(int arg1, int arg2) throws Throwable {
        testAdditionEngine = new AdditionEngine();
        resultAddition = testAdditionEngine.calculateAdditionAmount(arg1, arg2);
    }

    @Then("^A \\+ B is (.*)$")
    public void addition(double arg1) throws Throwable {
        Assert.assertEquals(arg1, resultAddition, 0.01);
    }
}
However I would like to know how to create an invalid table example [where ?? means I do not know what to put in the below table]
Examples:
| A | B | result |
| "é3-3" | 5 | ?? |
| "é3-3" | "aB" | ?? |
This should give a java.lang.NumberFormatException
In pure JUnit I would do something like the code below, which works like a charm [with @Test(expected = NumberFormatException.class)]. However, I have to use Cucumber... Can someone tell me how to perform such a test with Cucumber?
public class test {

    AdditionEngine testAdditionEngine = new AdditionEngine();

    @Test(expected = NumberFormatException.class)
    public void test() {
        testAdditionEngine.calculateAdditionAmount("é3-3", 5);
    }
}
Scenario Outline: "Invalid Addition"
Given A is <A> and B is <B>
Then A + B is <result>
Examples:
| A      | B    | result                          |
| "é3-3" | 5    | java.lang.NumberFormatException |
| "é3-3" | "aB" | java.lang.NumberFormatException |
Change the stepdefinition to take a String as an argument instead of Integer.
private Exception excep;

@Given("^A is (.*?) and B is (.*?)$")
public void addValid(String arg1, String arg2) {
    try {
        testAdditionEngine = new AdditionEngine();
        testAdditionEngine.calculateAdditionAmount(arg1, arg2);
    } catch (NumberFormatException e) {
        excep = e;
    }
}

@Then("^A \\+ B is (.*?)$")
public void validResult(String arg1) {
    assertEquals(arg1, excep.getClass().getName());
}
You will get an ambiguous step message if you are on Cucumber 2 or above. This is because the valid scenario outline will match both the integer and the String step definitions. Change either one of the scenario statements.
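The capture-then-assert pattern above can be seen with plain Integer.parseInt, which is presumably what throws inside calculateAdditionAmount; a minimal sketch without the Cucumber wiring:

```java
public class CaptureExceptionDemo {
    public static void main(String[] args) {
        Exception excep = null;
        try {
            // "é3-3" is not a valid integer, so parseInt throws NumberFormatException.
            Integer.parseInt("é3-3");
        } catch (NumberFormatException e) {
            excep = e; // captured in the Given step...
        }
        // ...and compared against the expected class name from the Examples table in the Then step.
        System.out.println(excep.getClass().getName()); // java.lang.NumberFormatException
    }
}
```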

Cannot resolve constructor 'Stage(com.badlogic.gdx.utils.viewport.Viewport, com.badlogic.gdx.graphics.g2d.SpriteBatch)'

I am a beginner in libGDX. When trying to make a game I get this error in Android Studio:
Error:(39, 16) Gradle: error: no suitable constructor found for
Stage(Viewport,SpriteBatch) constructor Stage.Stage() is not
applicable (actual and formal argument lists differ in length)
constructor Stage.Stage(StageStyle) is not applicable (actual and
formal argument lists differ in length)
public class Hud {
    public Stage stage;
    private Viewport viewport;

    private Integer worldTimer;
    private float timeCount;
    private Integer score;

    Label countdownLabel;
    Label scoreLabel;
    Label timeLabel;
    Label levelLabel;
    Label worldLabel;
    Label snakeLabel;

    public Hud(SpriteBatch sb) {
        worldTimer = 300;
        timeCount = 0;
        score = 0;

        viewport = new FitViewport(Snake.V_WIDTH, Snake.V_HEIGHT, new OrthographicCamera());
        stage = new Stage(viewport, sb);
    }
}
Here is the error:
stage = new Stage(viewport,sb);
I searched the internet for a solution but have not found anything. I'm a little lost.
Excuse me for my bad English :)
I hope you can help me. I will be grateful.
You have the wrong Stage class imported, it should be:
com.badlogic.gdx.scenes.scene2d.Stage
But you probably have this or some other package:
javafx.stage.Stage

Filtering a collection with LINQ to SQL, based on condition that involves other rows [closed]

This question is unlikely to help any future visitors; it is only relevant to a small geographic area, a specific moment in time, or an extraordinarily narrow situation that is not generally applicable to the worldwide audience of the internet. For help making this question more broadly applicable, visit the help center.
Closed 10 years ago.
I am getting a dataset using LINQ to SQL. I need to filter this dataset such that:
If a row with a null SourceName exists and there is at least one other row for the same 'Field' with a non-null SourceName, then the null row should be removed.
If it is the only row for that 'Field', then it should remain in the list.
Here's some example data; it consists of 3 columns: 'Field', 'SourceName' and 'Rate'.
Field | SourceName | Rate
10    | s1         | 9
10    | null       | null
11    | null       | null
11    | s2         | 5
11    | s3         | 4
12    | null       | null
13    | null       | null
13    | s4         | 7
13    | s5         | 8
8     | s6         | 2
9     | s7         | 23
9     | s8         | 9
9     | s9         | 3
Output should look like:
Field | SourceName | Rate
10    | s1         | 9
11    | s2         | 5
11    | s3         | 4
12    | null       | null // <- (remains since there's only
13    | s4         | 7    //     1 record for this 'Field')
13    | s5         | 8
8     | s6         | 2
9     | s7         | 23
9     | s8         | 9
9     | s9         | 3
How do I filter it?
What you are trying to achieve is not trivial and can't be solved with just a .Where() clause. Your filter criteria depends on a condition that requires grouping, so you will have to .GroupBy() and then flatten that collection of collections using .SelectMany().
The following code satisfies your expected output using LINQ to Objects, and I don't see any reason for LINQ to SQL not to be able to translate it to SQL; I haven't tried that, though.
//Group by the 'Field' field.
yourData.GroupBy(x => x.Field)
    //Project the grouping to add a new 'IsUnique' field
    .Select(g => new {
        SourceAndRate = g,
        IsUnique = g.Count() == 1,
    })
    //Flatten the collection using original items, plus IsUnique
    .SelectMany(t => t.SourceAndRate, (t, i) => new {
        Field = t.SourceAndRate.Key,
        SourceName = i.SourceName,
        Rate = i.Rate,
        IsUnique = t.IsUnique
    })
    //Now we can do the business here; filter nulls except unique
    .Where(x => x.SourceName != null || x.IsUnique);
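The same group-then-flatten idea, sketched in Java streams for comparison (groupingBy plays the role of GroupBy, flatMap the role of SelectMany; the class and record names are illustrative):

```java
import java.util.List;
import java.util.stream.Collectors;

class FilterDemo {
    // Minimal stand-in for one row of the dataset.
    record Row(int field, String sourceName, Integer rate) {}

    // Group rows by Field, then keep a row if it has a SourceName,
    // or if it is the only row in its group.
    static List<Row> filter(List<Row> data) {
        return data.stream()
                .collect(Collectors.groupingBy(Row::field))
                .values().stream()
                .flatMap(g -> g.stream().filter(r -> r.sourceName() != null || g.size() == 1))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Row> data = List.of(
                new Row(10, "s1", 9), new Row(10, null, null),
                new Row(12, null, null),
                new Row(9, "s8", 9), new Row(9, "s9", 3));
        // (10, null) is dropped; (12, null) survives as the only row for Field 12.
        FilterDemo.filter(data).forEach(System.out::println);
    }
}
```

Note that the groups come out of a HashMap here, so the relative order of fields is not preserved; sort afterwards if the original row order matters.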
Use LINQ's built-in 'Where' clause with a lambda continuation:
A simple static example of using lambdas and a simple POCO class to store the data in a list like yours:
using System;
using System.Collections.Generic;
using System.Linq;

namespace Simple
{
    class Program
    {
        class Data
        {
            public string Field { get; set; }
            public string SourceName { get; set; }
            public string Rate { get; set; }
        }

        static List<Data> Create()
        {
            return new List<Data>
            {
                new Data {Field = "10", SourceName = null, Rate = null},
                new Data {Field = "11", SourceName = null, Rate = null},
                new Data {Field = "11", SourceName = "s2", Rate = "5"}
            };
        }

        static void Main(string[] args)
        {
            var ls = Create();
            Console.WriteLine("Show me my whole list: \n\n");
            // write out everything
            ls.ForEach(x => Console.WriteLine(x.Field + "\t" + x.SourceName + "\t" + x.Rate + "\n"));

            Console.WriteLine("Show me only non nulls: \n\n");
            // exclude some things
            ls.Where(l => l.SourceName != null)
              .ToList()
              .ForEach(x => Console.WriteLine(x.Field + "\t" + x.SourceName + "\t" + x.Rate + "\n"));

            Console.ReadLine();
        }
    }
}
