Publishing Karate test report to Cucumber Open Report [duplicate]

We are using Karate heavily for various projects, and though the reports generated by Karate are more than anyone would need, I am still interested in getting Allure into the mix.
I added the allure-junit4 dependency and an Allure listener:
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <version>2.20</version>
  <configuration>
    <argLine>
      -javaagent:"${settings.localRepository}/org/aspectj/aspectjweaver/${aspectj.version}/aspectjweaver-${aspectj.version}.jar"
      <!-- -Dcucumber.options="--plugin io.qameta.allure.cucumberjvm.AllureCucumberJvm"-->
    </argLine>
    <properties>
      <property>
        <name>listener</name>
        <value>io.qameta.allure.junit4.AllureJunit4</value>
      </property>
    </properties>
  </configuration>
  <dependencies>
    <dependency>
      <groupId>org.aspectj</groupId>
      <artifactId>aspectjweaver</artifactId>
      <version>${aspectj.version}</version>
    </dependency>
  </dependencies>
</plugin>
Now the allure-results directory is getting created and I can see a report, but it's almost blank.
How can I get an Allure report generated for a Karate-based project?

If Allure supports the Cucumber JSON output, it should work. Otherwise I suggest you take this up with the Allure team.
You can refer to this thread (for Extent): https://github.com/intuit/karate/issues/619
EDIT: Since I point anyone asking about extending or customizing reports to this answer, read on.
From Karate 1.0 onwards, the Results object can be used to get all data about the test results. Multiple JSON files will also be output to <build>/karate-reports. You can even re-try some tests and merge the results: https://github.com/intuit/karate/wiki/1.0-upgrade-guide#retry-framework-experimental
Also please be aware of changes to the Java hooks; the interface is called RuntimeHook now: https://github.com/intuit/karate/wiki/1.0-upgrade-guide#hooks
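For anyone wiring up a custom or third-party report (Allure included), a minimal sketch of the Karate 1.0+ Java runner API looks roughly like this; the classpath below is a placeholder, and outputCucumberJson(true) is only needed if your report tool consumes Cucumber JSON:
import com.intuit.karate.Results;
import com.intuit.karate.Runner;

class KarateReportData {
    public static void main(String[] args) {
        // Placeholder classpath; point this at your own feature files.
        Results results = Runner.path("classpath:features")
                .outputCucumberJson(true)   // also emit Cucumber JSON for external report tools
                .parallel(5);
        // The Results object exposes the raw outcome data a custom reporter can consume.
        System.out.println("failed scenarios: " + results.getFailCount());
        System.out.println("report dir: " + results.getReportDir());
    }
}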

Related

how to control the number of threads for execution in cucumber testNG parallel

I'm trying to run 2 Cucumber tests in parallel and sequentially using TestNG and SpringBootTest, but when my tests execute the following happens:
mvn test
2 browsers open and both navigate to the Wikipedia homepage.
If you add 2 more scenarios, it opens that many more browsers, one thread per scenario; I don't have any control over the number of threads used for execution.
How can I control the number of threads and the data-provider thread count? Any help is appreciated.
Repo : https://github.com/cmccarthyIrl/spring-cucumber-testng-parallel-test-harness
The likely reason is that the runner you are using converts everything into a TestNG data-driven test: a single test whose scenarios from each feature file are supplied through a data provider. This is not the right approach. However, TestNG has a separate property to set the thread count for data-driven tests. You can set data-provider-thread-count in the XML configuration file at the suite level, or pass the command-line argument -dataproviderthreadcount to specify the number of threads.
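If you prefer to build the suite programmatically rather than through testng.xml, the same knob is exposed on TestNG's XmlSuite; a minimal sketch (the runner class name is a placeholder):
import java.util.Collections;
import org.testng.TestNG;
import org.testng.xml.XmlClass;
import org.testng.xml.XmlSuite;
import org.testng.xml.XmlTest;

public class ParallelSuiteRunner {
    public static void main(String[] args) {
        XmlSuite suite = new XmlSuite();
        suite.setName("cucumber-suite");
        // Programmatic equivalent of data-provider-thread-count="4" at suite level
        suite.setDataProviderThreadCount(4);

        XmlTest test = new XmlTest(suite);
        test.setName("cucumber-test");
        // Placeholder: point this at your own TestNG/Cucumber runner class
        test.setXmlClasses(Collections.singletonList(new XmlClass("com.example.RunCucumberTest")));

        TestNG testng = new TestNG();
        testng.setXmlSuites(Collections.singletonList(suite));
        testng.run();
    }
}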
Better approach
You can look into another library, qaf-cucumber, which has a native TestNG implementation. It treats each scenario as a TestNG test method, which gives more control and lets you use every TestNG feature. With this library, only scenarios with examples are converted into TestNG data-driven tests.
You don't need an additional class to run the tests. Just use the factory class provided to have different configuration combinations. Here is a sample configuration file:
<!DOCTYPE suite SYSTEM "http://testng.org/testng-1.0.dtd">
<suite name="QAF Demo" verbose="1">
  <test name="Web-Suite" enabled="true">
    <classes>
      <class name="com.qmetry.qaf.automation.cucumber.runner.CucumberScenarioFactory" />
    </classes>
  </test>
</suite>
Note: As of today qaf-cucumber supports cucumber 5.x
TestNG runs data providers with a parallelism of 10 by default. You can tell Maven to tell TestNG to use fewer threads.
https://github.com/cucumber/cucumber-jvm/tree/main/testng#parallel-execution
<plugins>
  <plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <configuration>
      <properties>
        <property>
          <name>dataproviderthreadcount</name>
          <value>${threadcount}</value>
        </property>
      </properties>
    </configuration>
  </plugin>
</plugins>
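With a Maven property (or -D system property) named threadcount supplying the value above, something like mvn test -Dthreadcount=4 should cap the data-provider pool at 4 threads; the exact property name is whatever you choose in your POM.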

Error while read or write Parquet format data

I have created an external table pointing to Azure ADLS with Parquet storage, and while inserting data into that table I am getting the error below. I am using Databricks for the execution:
org.apache.spark.sql.AnalysisException: Multiple sources found for parquet (org.apache.spark.sql.execution.datasources.v2.parquet.ParquetDataSourceV2, org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat), please specify the fully qualified class name.;
This was working perfectly fine yesterday, and I started getting this error today.
I couldn't find any answer on the internet as to why this is happening.
This issue has been fixed. The reason for the error was that we had installed the Spark SQL DB connector provided by Azure as an uber jar, which also pulled in dependencies for the Parquet file format.
If you want a workaround without cleaning up dependencies, here is how you choose one of the sources (exemplified with "org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat"):
Replace:
spark.read.parquet("<path_to_parquet_file>")
With
spark.read.format("org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat").load("<path_to_parquet_file>")
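Note that both implementations read the same Parquet files; specifying the fully qualified class name only tells Spark which of the two registered sources to use, so this is a workaround for the ambiguity rather than a fix for the underlying classpath conflict.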
You may also have more than one jar in the spark/jars/ directory, for example spark-sql_2.12-2.4.4 and spark-sql_2.12-3.0.3, which can lead to this multiple-sources issue.
I had a similar issue, which was caused by a jar dependency conflict. I use Maven to package my Spark jar with the maven-shade-plugin, and the plugin excludes the conflicting jars; that works for me.
This is the relevant part of my pom.xml:
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <version>3.4.0</version>
  <configuration>
    <artifactSet>
      <excludes>
        <exclude>org.scala-lang:*:*</exclude>
        <exclude>org.apache.spark:*:*</exclude>
      </excludes>
    </artifactSet>
    <filters>
      <filter>
        <artifact>*:*</artifact>
        <excludes>
          <exclude>META-INF/*.SF</exclude>
          <exclude>META-INF/*.DSA</exclude>
          <exclude>META-INF/*.RSA</exclude>
        </excludes>
      </filter>
    </filters>
    <finalName>spark_anticheat_shaded</finalName>
  </configuration>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
    </execution>
  </executions>
</plugin>
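With a setup like this, mvn package should produce target/spark_anticheat_shaded.jar without bundling the Scala or Spark classes, so only the cluster's own Spark jars end up on the runtime classpath and the Parquet source should no longer be registered twice.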

Static metamodel class is not generated

I just started learning and using JHipster. I have a question about JPA static metamodel generation. The following is what I have done according to the JHipster website, but the static metamodel class (Class X_) is not generated:
I created an entity called SalesByDepartment. After this entity was generated, I changed its JSON file in the .jhipster folder under my project folder by setting service to serviceImpl from no, and jpaMetamodelFiltering to true. My understanding is that I need to re-run the entity sub-generator to regenerate the same entity and enable the filtering feature after making this change to the entity's JSON file. However, I can only find 'SalesByDepartmentCriteria' and 'SalesByDepartmentQueryService'. There is no class 'SalesByDepartment_' under the domain package. I also checked pom.xml and I can find the plugin:
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-compiler-plugin</artifactId>
  <version>${maven-compiler-plugin.version}</version>
  <configuration>
    <annotationProcessorPaths>
      <path>
        <groupId>org.mapstruct</groupId>
        <artifactId>mapstruct-processor</artifactId>
        <version>${mapstruct.version}</version>
      </path>
      <!-- For JPA static metamodel generation -->
      <path>
        <groupId>org.hibernate</groupId>
        <artifactId>hibernate-jpamodelgen</artifactId>
        <version>${hibernate.version}</version>
      </path>
    </annotationProcessorPaths>
  </configuration>
</plugin>
May I know if there is anything else I have missed to get 'SalesByDepartment_' generated under the domain package?
Thank you for the help.
By the way, it worked fine when I generated the first project. I did it the same way and the static metamodel classes were created automatically under the project folder 'com.xxx.domain'. I can also find them under the target folder after the Maven build. I guess something is wrong, but I still have no idea what. Below is a screenshot of the two projects that I have created using 'jhipster'. A is the earlier project where I could generate the static metamodel, but B doesn't work:
[screenshot comparing project A and project B]
I had this problem too; the best way I found for myself was to add the dependency to Maven and also to the annotation processor path. The dependency:
<dependencies>
  ...
  <dependency>
    <groupId>org.hibernate</groupId>
    <artifactId>hibernate-jpamodelgen</artifactId>
    <version>${hibernate.version}</version>
  </dependency>
</dependencies>
And the annotation processor path:
<build>
  <plugins>
    ...
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-compiler-plugin</artifactId>
      <configuration>
        <source>11</source>
        <target>11</target>
        <annotationProcessorPaths>
          <path>
            <groupId>org.hibernate</groupId>
            <artifactId>hibernate-jpamodelgen</artifactId>
            <version>${hibernate.version}</version>
          </path>
          ...
        </annotationProcessorPaths>
      </configuration>
    </plugin>
  </plugins>
</build>
Hope it helps somebody )
The JPA static metamodel is generated by the build process (Maven or Gradle), as explained in the JHipster doc, so you just have to build your app and you'll find SalesByDepartment_.java under target for Maven and under build for Gradle.
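For reference, the generated class is plain annotated Java that the annotation processor writes next to your entity during compilation; it typically looks roughly like the sketch below (the attribute names here are hypothetical and depend on your entity's fields):
import javax.persistence.metamodel.SingularAttribute;
import javax.persistence.metamodel.StaticMetamodel;

// Hypothetical sketch of what hibernate-jpamodelgen emits for the entity;
// the real attribute names mirror the fields declared on SalesByDepartment.
@StaticMetamodel(SalesByDepartment.class)
public abstract class SalesByDepartment_ {
    public static volatile SingularAttribute<SalesByDepartment, Long> id;
    public static volatile SingularAttribute<SalesByDepartment, String> departmentName;
}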
In my case the problem was an issue in the services that the compiler didn't report.
I had changed a service class from serviceImpl to service, so the old implementation class still existed; I deleted that file and everything worked fine.

Spark: Avoiding Namespace Conflict when building modified spark

I am building a custom Spark into a jar file, and I want to use it alongside the default Spark build.
How do I change the namespace from org.apache.spark.allOfSpark to org.another.spark.allOfSpark without going through all the files?
I want to do this in order to avoid conflicts when importing modules. Thanks in advance.
Depending on the build tool you are using, you could use Maven's relocation feature to move your custom Spark into a new package at build time. There are similar features in sbt and other build tools.
If you specify what you are using to build your project, I can help further with your issue.
-- UPDATE
Here is some sample code for your pom.xml that should help you get started:
<project>
  <!-- Your project definition here, with the groupId, artifactId, and its dependencies -->
  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-shade-plugin</artifactId>
        <version>2.4.3</version>
        <executions>
          <execution>
            <phase>package</phase>
            <goals>
              <goal>shade</goal>
            </goals>
            <configuration>
              <relocations>
                <relocation>
                  <pattern>org.apache.spark</pattern>
                  <shadedPattern>shaded.org.apache.spark</shadedPattern>
                </relocation>
              </relocations>
            </configuration>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
</project>
This will effectively move all of Spark into a new package called shaded.org.apache.spark when you package your application (when you ask Maven to produce a jar).
If you need to exclude certain packages, you can use the <exclude> tag as shown in the link of Maven's relocation.
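For example, code that depends on the relocated jar would then import the custom classes under the new prefix, while plain org.apache.spark imports continue to resolve to the stock distribution; a hypothetical sketch assuming the relocation above:
// Hypothetical usage: after relocation, the custom build's classes live under
// the shaded.org.apache.spark prefix, so they can coexist with stock Spark jars.
import shaded.org.apache.spark.sql.SparkSession;

public class CustomSparkApp {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("custom-spark-demo")
                .master("local[*]")
                .getOrCreate();
        // ... use the relocated, customized Spark APIs here ...
        spark.stop();
    }
}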
If what you are trying to achieve is simply to customize some parts of Spark, I would advise you to either fork Spark's code and directly rewrite the parts of MLlib you need, then build it just for yourself (or contribute it to the community if it can be useful).
Or you could simply pull Spark in as a dependency from Maven and just override the classes you are modifying; Maven should then use your own classes instead of the ones in the original Spark package.

How to enforce a strict Maven dependency policy (dependency chain attack)

I would like to enforce a strict Maven dependency policy which goes beyond the basic checksumPolicy=fail approach.
This is an attempt to provide protection against a modified release dependency which still has a valid digest value also known as a "dependency chain attack".
This situation could arise from the following scenarios:
the dependency has been updated, but the author has not updated the version number and managed to overwrite the earlier release (or their repo account has been compromised)
a man-in-the-middle attack is in place (with on-the-fly rewriting/hashing)
the repository itself has been compromised
In discussions with other developers, one approach to combat the above is to keep a list of known MD5/SHA digests in the pom.xml and have Maven verify that each downloaded dependency has the same digest. This ensures that, as long as the source code repository remains secure, any included dependencies that have been compromised will be detected.
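For context, the digests in such a whitelist are just the hex-encoded hashes of the artifact files; a minimal sketch of computing one yourself (SHA-1 here, with the jar path passed as an argument):
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.security.MessageDigest;

public class ArtifactDigest {
    // Compute the hex SHA-1 of an artifact jar so it can be compared against a whitelist entry.
    static String sha1Hex(Path jar) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-1");
        try (InputStream in = Files.newInputStream(jar)) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                md.update(buf, 0, n);
            }
        }
        StringBuilder sb = new StringBuilder();
        for (byte b : md.digest()) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        // Usage: java ArtifactDigest /path/to/artifact.jar
        System.out.println(sha1Hex(Paths.get(args[0])));
    }
}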
Thus my question is twofold:
are there any alternative approaches that would work more efficiently?
are there any existing implementations/plugins that do this job?
If anyone is wrestling with this issue themselves I've created a Maven Enforcer Plugin Rule that deals with it. You can specify a list of artifact URNs that include the expected SHA1 hash value and have the enforcer verify that this is indeed what is being used in the build.
It is available through Maven Central under MIT license, with source code in GitHub here: https://github.com/gary-rowe/BitcoinjEnforcerRules
While the project indicates that it is for the Bitcoinj library, it is actually a general-purpose solution which could be included in any security-conscious build process. It will also scan your existing project and identify any problem areas while automatically building the whitelist for you.
Below is an example of the configuration you'd require in your project to use it.
<build>
  <plugins>
    ...
    <!-- Use the Enforcer to verify build integrity -->
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-enforcer-plugin</artifactId>
      <version>1.2</version>
      <executions>
        <execution>
          <id>enforce</id>
          <phase>verify</phase>
          <goals>
            <goal>enforce</goal>
          </goals>
          <configuration>
            <rules>
              <digestRule implementation="uk.co.froot.maven.enforcer.DigestRule">
                <!-- Create a snapshot to build the list of URNs below -->
                <buildSnapshot>true</buildSnapshot>
                <!-- List of required hashes -->
                <!-- Format is URN of groupId:artifactId:version:type:classifier:scope:hash -->
                <!-- classifier is "null" if not present -->
                <urns>
                  <urn>antlr:antlr:2.7.7:jar:null:compile:83cd2cd674a217ade95a4bb83a8a14f351f48bd0</urn>
                  <urn>dom4j:dom4j:1.6.1:jar:null:compile:5d3ccc056b6f056dbf0dddfdf43894b9065a8f94</urn>
                  <urn>org.bouncycastle:bcprov-jdk15:1.46:jar:null:compile:d726ceb2dcc711ef066cc639c12d856128ea1ef1</urn>
                  <urn>org.hibernate.common:hibernate-commons-annotations:4.0.1.Final:jar:null:compile:78bcf608d997d0529be2f4f781fdc89e801c9e88</urn>
                  <urn>org.hibernate.javax.persistence:hibernate-jpa-2.0-api:1.0.1.Final:jar:null:compile:3306a165afa81938fc3d8a0948e891de9f6b192b</urn>
                  <urn>org.hibernate:hibernate-core:4.1.8.Final:jar:null:compile:82b420eaf9f34f94ed5295454b068e62a9a58320</urn>
                  <urn>org.hibernate:hibernate-entitymanager:4.1.8.Final:jar:null:compile:70a29cc959862b975647f9a03145274afb15fc3a</urn>
                  <urn>org.javassist:javassist:3.15.0-GA:jar:null:compile:79907309ca4bb4e5e51d4086cc4179b2611358d7</urn>
                  <urn>org.jboss.logging:jboss-logging:3.1.0.GA:jar:null:compile:c71f2856e7b60efe485db39b37a31811e6c84365</urn>
                  <urn>org.jboss.spec.javax.transaction:jboss-transaction-api_1.1_spec:1.0.0.Final:jar:null:compile:2ab6236535e085d86f37fd97ddfdd35c88c1a419</urn>
                  <!-- A check for the rules themselves -->
                  <urn>uk.co.froot.maven.enforcer:digest-enforcer-rules:0.0.1:jar:null:runtime:16a9e04f3fe4bb143c42782d07d5faf65b32106f</urn>
                </urns>
              </digestRule>
            </rules>
          </configuration>
        </execution>
      </executions>
      <!-- Ensure we download the enforcer rules -->
      <dependencies>
        <dependency>
          <groupId>uk.co.froot.maven.enforcer</groupId>
          <artifactId>digest-enforcer-rules</artifactId>
          <version>0.0.1</version>
        </dependency>
      </dependencies>
    </plugin>
    ...
  </plugins>
</build>
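Since the execution above binds the rule to the verify phase, an ordinary mvn verify (or mvn install) should fail the build as soon as any resolved artifact's SHA1 no longer matches its whitelisted URN.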
Sounds like a good job for the repository itself. Check out this other thread regarding a similar question.
I've no familiarity with the PGP signing scenario in Nexus, but does that sound like a good start?
