unable to read data from csv file using SerenityParameterizedRunner - cucumber

I am using Serenity BDD to load data from a CSV file, but my code is unable to fetch the values from the CSV. It shows null values for both xyz and abc when I try to print them in the test method i_setup_the_request_fields() below. What did I do wrong here?
Here is the Java code and the CSV file.
@RunWith(SerenityParameterizedRunner.class)
@UseTestDataFrom(value = "template/test/data/response/test.csv")
public class TestCustomSteps {

    private String abc;
    private String xyz;

    @Steps
    RestAssuredSteps restAssuredSteps;

    public void setAbc(String abc) {
        this.abc = abc;
    }

    public void setXyz(String xyz) {
        this.xyz = xyz;
    }

    @Qualifier
    public String qualifier() {
        return abc + "=>" + xyz;
    }

    @Test
    @Given("I setup the request fields")
    public void i_setup_the_request_fields() {
        // Write code here that turns the phrase above into concrete actions
        System.out.println(abc + "--" + xyz);
        Map<String, String> mapData = new HashMap<>();
        mapData.put("abc", abc);
        mapData.put("xyz", xyz);
        restAssuredSteps.setRequestFields(mapData);
    }
}
and the CSV file:
abc,xyz
6543210987654321,10000
6543210987654320,10000

A few things you can try, one at a time:
Your setter methods are not named the same as your private variables:
Try changing your variable names to cloakPan and memberId, or change your setter methods to match your variable names.
@UseTestDataFrom(value = "template/test/data/response/test.csv")
Maybe hard-code the full path to the file name (from the root) just to make sure it is looking in the right place.
There is an example here; copy it to see if that works: http://thucydides.info/docs/thucydides/_data_driven_testing_using_csv_files.html
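The point behind the first suggestion is that data-driven runners typically locate your setters by reflection, deriving the method name from the CSV header. A minimal, framework-free sketch of that lookup (CsvBackedTest, inject, and the getters are hypothetical stand-ins, not Serenity API):

```java
import java.lang.reflect.Method;
import java.util.Map;

// Hypothetical stand-in for the test class above: the runner knows only the
// CSV header names ("abc", "xyz") and looks for matching JavaBean setters.
class CsvBackedTest {
    private String abc;
    private String xyz;

    public void setAbc(String abc) { this.abc = abc; }
    public void setXyz(String xyz) { this.xyz = xyz; }

    public String getAbc() { return abc; }
    public String getXyz() { return xyz; }
}

public class SetterInjectionDemo {
    // Injects one CSV row by reflection, the way data-driven runners
    // usually do: column "abc" -> method setAbc(String).
    static void inject(Object target, Map<String, String> row) {
        try {
            for (Map.Entry<String, String> e : row.entrySet()) {
                String name = "set" + Character.toUpperCase(e.getKey().charAt(0))
                        + e.getKey().substring(1);
                Method m = target.getClass().getMethod(name, String.class);
                m.invoke(target, e.getValue());
            }
        } catch (ReflectiveOperationException ex) {
            // A missing or misnamed setter surfaces here; in a real runner the
            // field would simply stay null, which matches the symptom above.
            throw new IllegalStateException("No matching setter: " + ex.getMessage(), ex);
        }
    }

    public static void main(String[] args) {
        CsvBackedTest t = new CsvBackedTest();
        inject(t, Map.of("abc", "6543210987654321", "xyz", "10000"));
        System.out.println(t.getAbc() + "--" + t.getXyz());
    }
}
```

If the setter name does not line up with the column name, the lookup fails and the field is never populated, which is why mismatched names show up as nulls.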

Related

How to fix irregular behavior of string value given by setup method of mapper in mapreduce?

I'm very new to MapReduce and was learning about the implementation of the setup method. The new string value given by the configuration prints correctly, but when I tried to process it further, the initial value of the string took effect. I know strings are immutable, but the variable should provide the value it currently points to to other methods.
public class EMapper extends Mapper<LongWritable, Text, Text, Text> {

    String wordstring = "abcd"; // initialized wordstring with "abcd"

    public void setup(Context context) {
        Configuration config = new Configuration(context.getConfiguration());
        wordstring = config.get("mapper.word"); // As the string is immutable,
                                                // wordstring should now point to
                                                // the value given by mapper.word.
                                                // Here mapper.word="ankit", set
                                                // using -D in the hadoop command.
    }

    String def = wordstring;
    String jkl = String.valueOf(wordstring); // tried to copy the current value,
                                             // but jkl prints the initial value

    public void map(LongWritable key, Text value, Context context)
            throws InterruptedException, IOException {
        context.write(new Text("wordstring=" + wordstring + " " + "def=" + def),
                new Text("jkl=" + jkl));
    }
}
public class EDriver extends Configured implements Tool {

    private static Logger logger = LoggerFactory.getLogger(EDriver.class);

    public static void main(String[] args) throws Exception {
        logger.info("Driver started");
        int res = ToolRunner.run(new Configuration(), new EDriver(), args);
        System.exit(res);
    }

    public int run(String[] args) throws Exception {
        if (args.length != 2) {
            System.err.printf("Usage: %s needs arguments",
                    getClass().getSimpleName());
            return -1;
        }
        Configuration conf = getConf();
        Job job = new Job(conf);
        job.setJarByClass(EDriver.class);
        job.setJobName("E Record Reader");
        job.setMapperClass(EMapper.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(Text.class);
        job.setReducerClass(EReducer.class);
        job.setNumReduceTasks(0);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(NullWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        job.setInputFormatClass(ExcelInputFormat.class);
        return job.waitForCompletion(true) ? 0 : 1;
    }
}
I expected output to be
wordstring=ankit def=ankit jkl=ankit
Actual output is
wordstring=ankit def=abcd jkl=abcd
This has nothing to do with the mutability of Strings, and everything to do with code execution order.
Your setup method is only called after all field initializers have run. The order you write the code in doesn't change anything. If you were to rewrite the top section of your code in the order it actually executes, you'd have:
public class EMapper extends Mapper<LongWritable, Text, Text, Text> {

    String wordstring = "abcd";
    String jkl = String.valueOf(wordstring);

    public void setup(Context context) {
        Configuration config = new Configuration(context.getConfiguration());
        wordstring = config.get("mapper.word"); // By the time this is called, jkl has already been assigned to "abcd"
    }
So it's not surprising that jkl is still abcd. You should set jkl within the setup method, like so:
public class EMapper extends Mapper<LongWritable, Text, Text, Text> {

    String wordstring;
    String jkl;

    public void setup(Context context) {
        Configuration config = new Configuration(context.getConfiguration());
        wordstring = config.get("mapper.word");
        jkl = wordstring;
        // Here, jkl and wordstring are both different variables pointing to "ankit"
    }

    // Here, jkl and wordstring are null, as setup(Context context) has not yet run

    public void map(LongWritable key, Text value, Context context)
            throws InterruptedException, IOException {
        // Here, jkl and wordstring are both different variables pointing to "ankit"
        context.write(new Text("wordstring=" + wordstring),
                new Text("jkl=" + jkl));
    }
}
Of course, you don't actually need jkl; you can just use wordstring directly.
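The initialization-order point can be checked in plain Java, without Hadoop (Mapperish and InitOrderDemo are made-up names for this sketch):

```java
// Minimal sketch of why the field copy sees the initial value:
// field initializers run at construction time, before any
// setup-style method is called.
class Mapperish {
    String wordstring = "abcd";              // runs first, at construction
    String jkl = String.valueOf(wordstring); // also runs at construction -> "abcd"

    void setup() {
        // Reassigns wordstring only; jkl keeps the copy taken earlier.
        wordstring = "ankit";
    }
}

public class InitOrderDemo {
    public static void main(String[] args) {
        Mapperish m = new Mapperish();
        m.setup();
        System.out.println("wordstring=" + m.wordstring + " jkl=" + m.jkl);
        // prints: wordstring=ankit jkl=abcd
    }
}
```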
The problem has been solved. I was actually running Hadoop in distributed mode, where setup, mapper, reducer, and cleanup run on different JVMs, so data cannot be transported from setup to the mapper directly. The first wordstring object was initialized to "abcd" in the mapper. I tried to change wordstring in setup (another wordstring object was created), which was actually taking place in another JVM.
So when I tried to copy wordstring into jkl at
String jkl = String.valueOf(wordstring);
the first value of wordstring (the one created by the mapper and initialized to "abcd") was copied to jkl, not the one given by setup.
If I had run Hadoop in standalone mode, it would have used a single JVM, and the value given to wordstring by setup would have been copied to jkl.
I used
HashMap map = new HashMap();
to transport data between setup and the mapper, and then jkl got a copy of the value given to wordstring by setup.

Groovy: Proper way to create/use this class

I am loading a groovy script and I need the following class:
import java.util.function.Function
import java.util.function.Supplier

class MaskingPrintStream extends PrintStream {

    private final Function<String,String> subFunction
    private final Supplier<String> secretText

    public MaskingPrintStream(PrintStream out, Supplier<String> secretText, Function<String,String> subFunction) {
        super(out);
        this.subFunction = subFunction;
        this.secretText = secretText;
    }

    @Override
    public void write(byte[] b, int off, int len) {
        String out = new String(b, off, len);
        String secret = secretText.get();
        byte[] dump = out.replace(secret, subFunction.apply(secret)).getBytes();
        super.write(dump, 0, dump.length);
    }
}
Right now I have it in a file called MaskingPrintStream.groovy. But by doing this, I can effectively only access the class as an inner class of the class that gets created by default, corresponding to the file name.
What I want to work is code more like this:
def stream = evaluate(new File(ClassLoader.getSystemResource('MaskingPrintStream.groovy').file))
But as you can see, I need to give it some values before it's ready. Perhaps I could load the class into the JVM (not sure how from another groovy script) and then instantiate it the old-fashioned way?
The other problem is: How do I set it up so I don't have this nested class arrangement?
groovy.lang.Script#evaluate(java.lang.String) has a somewhat different purpose.
For your case you need groovy.lang.GroovyClassLoader, which can parse Groovy classes from source, compile them, and load them. Note that parseClass(String) treats its argument as source text, so pass the file as a File. Here's an example for your code:
def groovyClassLoader = new GroovyClassLoader(this.class.classLoader) // keep it around for loading other classes as well
groovyClassLoader.parseClass(new File('MaskingPrintStream.groovy'))
def buf = new ByteArrayOutputStream()
def maskingStream = new MaskingPrintStream(new PrintStream(buf), { 'secret' }, { 'XXXXX' })

maskingStream.with {
    append 'some text '
    append 'secret '
    append 'super-duper secret '
    append 'other text'
}
maskingStream.close()

println "buf = ${buf}"
And the output it produces in bash shell:
> ./my-script.groovy
buf = some text XXXXX super-duper XXXXX other text
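If you want to sanity-check the stream's behavior outside Groovy, the same class can be written in plain Java; MaskingDemo is a made-up driver for the sketch:

```java
import java.io.ByteArrayOutputStream;
import java.io.PrintStream;
import java.nio.charset.StandardCharsets;
import java.util.function.Function;
import java.util.function.Supplier;

// Plain-Java version of the Groovy class above: every chunk of bytes
// written through the stream has the secret replaced before it reaches
// the wrapped stream.
class MaskingPrintStream extends PrintStream {
    private final Function<String, String> subFunction;
    private final Supplier<String> secretText;

    MaskingPrintStream(PrintStream out, Supplier<String> secretText,
                       Function<String, String> subFunction) {
        super(out);
        this.subFunction = subFunction;
        this.secretText = secretText;
    }

    @Override
    public void write(byte[] b, int off, int len) {
        String out = new String(b, off, len, StandardCharsets.UTF_8);
        String secret = secretText.get();
        byte[] dump = out.replace(secret, subFunction.apply(secret))
                         .getBytes(StandardCharsets.UTF_8);
        super.write(dump, 0, dump.length);
    }
}

public class MaskingDemo {
    public static void main(String[] args) {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        MaskingPrintStream stream =
                new MaskingPrintStream(new PrintStream(buf), () -> "secret", s -> "XXXXX");
        stream.print("some text secret other text");
        stream.flush();
        stream.close();
        System.out.println("buf = " + buf);
    }
}
```

One caveat with this approach (in either language): the replacement only works when the whole secret lands inside a single write chunk, which holds for short strings like these.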

Random string in cucumber scenarios

I am testing a GUI using Cucumber, and I need to test its CRUD operations.
When I write a scenario to create a new entity in the GUI, I am unable to run it multiple times: the second run fails because the ID I specified for the entity already exists in the system (created in the first run).
The system I am testing doesn't allow deleting entities; it needs to be started in a special mode to delete them, so deleting the entity created by the test is not an option.
It would be great if I could use a random number for the entity id. For an example:
when user creates a new Branch with following values:
|Branch ID|<random_string_1>|
|Address|1, abc, def.|
|Telephone|01111111111|
And user searches for a branch by "Branch ID" = "<random_string_1>"
Then branch details should be as following
|Branch ID|<random_string_1>|
|Address|1, abc, def.|
|Telephone|01111111111|
Is there an option in cucumber to do something like this? Or, is there any other way I can achieve this?
In the end, I've added RandomStringTransformer class to test suite:
public class RandomStringTransformer extends Transformer<String> {

    private static final Map<String, String> RANDOM_STRINGS = new HashMap<>(); // key -> random string

    public static final RandomStringTransformer INSTANCE = new RandomStringTransformer();

    @Override
    public String transform(String input) {
        return transformString(input);
    }

    public DataTable transform(DataTable dataTable) {
        dataTable.getGherkinRows().forEach(dataTableRow -> dataTableRow.getCells().replaceAll(this::transformString));
        return dataTable;
    }

    private String transformString(String input) {
        final String[] inputCopy = {input};
        Map<String, String> replacements = new HashMap<>();
        Matcher matcher = Pattern.compile("(<random_string_[^>]*>)").matcher(input);
        while (matcher.find()) {
            String group = matcher.group(0);
            replacements.put(group, RANDOM_STRINGS.computeIfAbsent(group, key -> Utilities.getNextUniqueString()));
        }
        replacements.forEach((key, value) -> inputCopy[0] = inputCopy[0].replace(key, value));
        return inputCopy[0];
    }
}
And used the transformer in step definition:
@When("^user creates a branch of name \"([^\"]*)\" with following values$")
public void branchIsCreatedWithDetails(@Transform(RandomStringTransformer.class) String branchName, DataTable fieldValues) {
    fieldValues = RandomStringTransformer.INSTANCE.transform(fieldValues);
    // Now fieldValues table values and branchName are replaced with random values
    // if they were in the format <random_string_SOMETHING>
}
The @Transform annotation is no longer supported in Cucumber 3.
You have to transform the data manually in the method body.
@When("^user creates a branch of name \"([^\"]*)\" with following values$")
public void branchIsCreatedWithDetails(String branchName, DataTable fieldValues) {
    fieldValues = RandomStringTransformer.INSTANCE.transform(fieldValues);
    // Now fieldValues table values and branchName are replaced with random values
    // if they were in the format <random_string_SOMETHING>
}
Read this for more information on migrating: http://grasshopper.tech/98/
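The transformer's core idea, one generated value per distinct placeholder, reused across steps so that "create" and "search" see the same ID, works the same without any Cucumber types. A framework-free sketch (PlaceholderDemo and getNextUniqueString are made-up stand-ins for the classes above):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class PlaceholderDemo {
    // key -> generated random string, shared across all steps of a run
    private static final Map<String, String> RANDOM_STRINGS = new HashMap<>();
    private static final AtomicInteger COUNTER = new AtomicInteger();
    private static final Pattern PLACEHOLDER = Pattern.compile("(<random_string_[^>]*>)");

    // Stand-in for the Utilities.getNextUniqueString() helper above.
    static String getNextUniqueString() {
        return "UNIQ-" + COUNTER.incrementAndGet();
    }

    static String transformString(String input) {
        // Collect every placeholder, generating a value only on first sight,
        // then substitute them all into the input.
        Map<String, String> replacements = new HashMap<>();
        Matcher matcher = PLACEHOLDER.matcher(input);
        while (matcher.find()) {
            String group = matcher.group(1);
            replacements.put(group,
                    RANDOM_STRINGS.computeIfAbsent(group, k -> getNextUniqueString()));
        }
        String result = input;
        for (Map.Entry<String, String> e : replacements.entrySet()) {
            result = result.replace(e.getKey(), e.getValue());
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(transformString("Branch ID = <random_string_1>"));
        // A later step sees the same generated value for the same placeholder:
        System.out.println(transformString("search by <random_string_1>"));
    }
}
```

Because the map is keyed by the placeholder name, <random_string_1> and <random_string_2> get independent values, while repeated uses of the same placeholder stay consistent within the run.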

Passing dynamically generated value to NUnit Custom Attribute

For our test scenarios, based on the configuration of the application, we may want to either enable or disable a scenario. For this purpose, I created a custom IgnoreIfConfig attribute like this:
public class IgnoreIfConfigAttribute : Attribute, ITestAction
{
    public IgnoreIfConfigAttribute(string config)
    {
        _config = config;
    }

    public void BeforeTest(ITest test)
    {
        if (_config != "Enabled") NUnit.Framework.Assert.Ignore("Test is Ignored due to Access level");
    }

    public void AfterTest(ITest test)
    {
    }

    public ActionTargets Targets { get; private set; }

    public string _config { get; set; }
}
Which can be used as follows :
[Test, Order(2)]
[IgnoreIfConfig("Enabled")] // Config.Enabled.ToString()
public void TC002_DoTHisIfEnabledByConfig()
{
}
Now, this attribute only takes a constant string as input. If I were to replace this with something generated dynamically at runtime, such as a value from a JSON file, how can I convert it to a constant expression, typeof expression, or array-creation expression of the attribute parameter type, such as Config.Enabled?
You can't do what you asked, but you can look at the problem differently. Just give the attribute the name of some property in the JSON file to be examined, e.g. "Config".
As per Charlie's suggestion, I implemented it like this:
PropCol pc = new PropCol(); // Class where the framework reads the Json data.

public IgnoreIfConfigAttribute(string config)
{
    pc.ReadJson();
    if (config == "TestCase") _config = PropCol.TestCase;
    // Here TestCase is a Json element which is either enabled or disabled.
}

Cassandra Query column mapping

I have a Cassandra table trans_by_date with columns origin, tran_date (and some other columns). When I try to run the code below, I get this error:
java.util.NoSuchElementException: Columns not found in table trans.trans_by_date: TRAN_DATE. The column does exist.
Any syntax gotcha?
JavaRDD<TransByDate> transDateRDD = javaFunctions(sc)
    .cassandraTable("trans", "trans_by_date", CassandraJavaUtil.mapRowTo(TransByDate.class))
    .select(CassandraJavaUtil.column("origin"), CassandraJavaUtil.column("TRAN_DATE").as("transdate"));

public static class TransByDate implements Serializable {
    private String origin;
    private Date transdate;

    public String getOrigin() { return origin; }
    public void setOrigin(String id) { this.origin = id; }

    public Date getTransdate() { return transdate; }
    public void setTransdate(Date trans_date) { this.transdate = trans_date; }
}
Thanks
If you change CassandraJavaUtil.column("TRAN_DATE") to CassandraJavaUtil.column("tran_date"), i.e. only use lower-case column names, your code should work.
It seems that the CassandraJavaUtil puts the column name into double quotes when creating the select query.
See the following link for uppercase and lowercase handling in cassandra:
https://docs.datastax.com/en/cql/3.3/cql/cql_reference/ucase-lcase_r.html
