I'm setting up a test framework for my project. After configuring pytest and coverage, it shows 100% coverage for all the Python files, even though there are no tests yet. My guess is that it is counting the source code itself as tests and reporting 100% coverage for every script.
Apologies, I can't post the screenshot from my work account. Please let me know if there is anything wrong with my configuration or with the way I'm running it.
Project structure:
etl-orchestrator
- etl_api
  - com.etl.api
  - com.etl.tests
- etl_core
  - com.etl.core
  - com.etl.tests
- etl_services
  - com.etl.services
  - com.etl.tests
- .coveragerc
- pytest.ini
- setup.py
.coveragerc:
[run]
source = .
omit =
    */__init__.py
    *tests*
    *virtualenvs*
    .venv/*
    *build/*
pytest.ini:
[pytest]
python_files = tests/*.py
addopts = --cov-config=.coveragerc --cov=etl_api --cov=etl_core --cov=etl_services
Command to run:
cd <project root directory>
pytest
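One way to sanity-check the numbers (a minimal sketch; the file name and placement below are hypothetical) is to drop a trivial test where pytest will collect it and see whether the reported coverage changes. Note that python_files normally takes filename patterns such as test_*.py rather than directory paths, so it is also worth running pytest --collect-only to confirm that any tests are collected at all.

# test_sanity.py -- hypothetical placement, e.g. inside one of the com.etl.tests folders
# With only this test collected, coverage for etl_api/etl_core/etl_services should be
# far below 100%, since none of their code runs here. If the report still shows 100%,
# coverage is measuring something other than test execution.
def test_sanity():
    assert 1 + 1 == 2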
I am trying to install the library spark-xml_2.12-0.15.0 using dbx.
The documentation I found says to include it in the conf/deployment.yml file like this:
custom:
  basic-cluster-props: &basic-cluster-props
    spark_version: "10.4.x-cpu-ml-scala2.12"

  basic-static-cluster: &basic-static-cluster
    new_cluster:
      <<: *basic-cluster-props
      num_workers: 2

build:
  commands:
    - "mvn clean package" #

environments:
  default:
    workflows:
      - name: "charming-aurora-sample-jvm"
        libraries:
          - jar: "{{ 'file://' + dbx.get_last_modified_file('target/scala-2.12', 'jar') }}" #
        tasks:
          - task_key: "main"
            <<: *basic-static-cluster
            deployment_config: #
              no_package: true
            spark_jar_task:
              main_class_name: "org.some.main.ClassName"
You can see the documentation page here: https://dbx.readthedocs.io/en/latest/guides/jvm/jvm_devops/?h=maven
I have installed the library on the cluster using a Maven file (https://mvnrepository.com/artifact/com.databricks/spark-xml_2.13/0.15.0):
<!-- https://mvnrepository.com/artifact/com.databricks/spark-xml -->
<dependency>
    <groupId>com.databricks</groupId>
    <artifactId>spark-xml_2.13</artifactId>
    <version>0.15.0</version>
</dependency>
I can use it at the notebook level, but not from a job deployed using dbx.
Edit
I am using PySpark.
So, I included it like this in conf/deployment.yml:
libraries:
  - maven: "com.databricks:spark-xml_2.12:0.15.0"
And in the file conf/deployment.yml:
- name: "my-job"
  libraries:
    - maven:
        - coordinates: "com.databricks:spark-xml_2.12:0.15.0"
  tasks:
    - task_key: "first_task"
      <<: *basic-static-cluster
      python_wheel_task:
        package_name: "project_name"
        entry_point: "jl" # take a look at the setup.py entry_points section for details on how to define an entrypoint
        parameters: ["--conf-file", "file:fuse://conf/tasks/my_job_config.yml"]
Then I run:
dbx deploy my-job
This throws the following error:
HTTPError: 400 Client Error: Bad Request for url: https://adb-xxxx.azuredatabricks.net/api/2.0/jobs/reset
Response from server:
{ 'error_code': 'MALFORMED_REQUEST',
  'message': "Could not parse request object: Expected 'START_OBJECT' not "
             "'START_ARRAY'\n"
             ' at [Source: (ByteArrayInputStream); line: 1, column: 91]\n'
             ' at [Source: java.io.ByteArrayInputStream@37fda06f; line: 1, '
             'column: 91]'}
You were pretty close, and the error you've run into doesn't really say much. We plan to introduce structure verification so that such checks are more understandable.
The correct deployment file structure should look as follows:
- name: "my-job"
  tasks:
    - task_key: "first_task"
      <<: *basic-static-cluster
      # please note that the libraries section is on the task level
      libraries:
        - maven:
            coordinates: "com.databricks:spark-xml_2.12:0.15.0"
      python_wheel_task:
        package_name: "project_name"
        entry_point: "jl" # take a look at the setup.py entry_points section for details on how to define an entrypoint
        parameters: ["--conf-file", "file:fuse://conf/tasks/my_job_config.yml"]
Two important points here:
- the libraries section is on the task level
- the maven section expects an object, not a list, therefore this will not work:
# THIS IS INCORRECT, DON'T DO THIS
libraries:
  - maven:
      - coordinates: "com.databricks:spark-xml_2.12:0.15.0"
But this will:
# correct structure
libraries:
  - maven:
      coordinates: "com.databricks:spark-xml_2.12:0.15.0"
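To make the START_OBJECT / START_ARRAY message concrete, here is a quick sketch (assuming PyYAML is available) of what the Jobs API receives in each case:

# Compare how the two YAML shapes parse (assumes PyYAML: pip install pyyaml)
import yaml

wrong = yaml.safe_load("""
libraries:
  - maven:
      - coordinates: "com.databricks:spark-xml_2.12:0.15.0"
""")
right = yaml.safe_load("""
libraries:
  - maven:
      coordinates: "com.databricks:spark-xml_2.12:0.15.0"
""")

print(type(wrong["libraries"][0]["maven"]))  # <class 'list'> -> sent as START_ARRAY
print(type(right["libraries"][0]["maven"]))  # <class 'dict'> -> sent as START_OBJECT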
I've summarized these details in a new documentation section.
The documentation says the following:
The workflows section of the deployment file fully follows the Databricks Jobs API structures.
If you look into the API documentation, you will see that you need to use maven instead of file, and provide the Maven coordinates as a string. Something like this (please note that you need to use Scala 2.12, not 2.13):
libraries:
  - maven:
      coordinates: "com.databricks:spark-xml_2.12:0.15.0"
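Once the job is deployed with the Maven coordinate attached, the package can be used from PySpark task code. A minimal sketch (the input path and rowTag value below are hypothetical placeholders):

# Minimal PySpark read via spark-xml, once com.databricks:spark-xml_2.12:0.15.0
# is attached to the job cluster. Path and rowTag are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

df = (
    spark.read.format("xml")            # short name registered by spark-xml
    .option("rowTag", "record")         # XML element that becomes one row
    .load("dbfs:/mnt/raw/events.xml")   # hypothetical input path
)
df.printSchema()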
I know there are a lot of similar questions out there, but none of them has a proper answer. I am trying to deploy my code using a GitLab CI/CD pipeline. While executing the deployment stage, my pipeline failed with the error shown below.
My serverless.yml has this code related to excludes:
package:
  patterns:
    - '!nltk'
    - '!node_modules/**'
    - '!package-lock.json'
    - '!package.json'
    - '!__pycache__/**'
    - '!.gitlab-ci.yml'
    - '!tests/**'
    - '!README.md'
The error I am getting is:
Serverless Error ----------------------------------------
No file matches include / exclude patterns
I forgot to mention that I have an nltk layer which I am deploying in the same serverless.yml as my Lambda function and other resources.
I am not sure what exactly has to be done to get rid of the error. Any help would be appreciated, thank you.
Your directives do not define any inclusive patterns. Perhaps you want to list the files and directories you need packaged. Each directive builds on the previous one.
Something like:
package:
  patterns:
    - '**/**'
    - '!nltk'
    - '!node_modules/**'
    - '!package-lock.json'
    - '!package.json'
    - '!__pycache__/**'
    - '!.gitlab-ci.yml'
    - '!tests/**'
    - '!README.md'
See https://www.serverless.com/framework/docs/providers/aws/guide/packaging/#patterns
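To illustrate that ordering, here is a rough Python sketch of the idea that later patterns override earlier ones for the files they match. This is a simplification for intuition only, not the framework's actual matcher, and real Serverless glob semantics differ in details (for example, it uses '**/**' as its match-everything pattern, while plain fnmatch needs '**'):

# Simplified model of ordered include/exclude patterns (hypothetical, for intuition).
import fnmatch

def resolve(files, patterns):
    included = set()
    for pattern in patterns:
        negated = pattern.startswith("!")
        raw = pattern[1:] if negated else pattern
        for f in files:
            if fnmatch.fnmatch(f, raw):
                if negated:
                    included.discard(f)  # a later exclude overrides earlier includes
                else:
                    included.add(f)      # a later include overrides earlier excludes
    return sorted(included)

files = ["handler.py", "README.md", "tests/test_handler.py", "node_modules/pkg/index.js"]
patterns = ["**", "!node_modules/**", "!tests/**", "!README.md"]
print(resolve(files, patterns))  # -> ['handler.py']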
I am writing automation tests with Cucumber/Selenide and I want to rerun failed scenarios.
This is part of my project with only two small tests (one of them failing) to demonstrate the behavior: https://github.com/mtpx/cucumberRerun
I read how to do it in How to rerun the failed scenarios using Cucumber? and https://medium.com/@belek.bagishbekov/how-to-rerun-failed-test-cases-in-cucumber-b7fe9b1dcf9c
In my test runner for application.feature (ApplicationTest), the @CucumberOptions plugin section contains the line "rerun:rerun/failed_scenarios.txt". According to the links above, this should generate a text file listing the failed scenarios, but after test execution with mvn clean test (with failing scenarios) no such file appears.
Do you know what is wrong here? Why don't I get the rerun file after the build?
I am using Selenide instead of Selenium; maybe the problem is there?
Create another runner class as shown below; let's call it FailedScenarios.java. Whenever you notice failed scenarios, run this file. It uses target/rerun.txt as the input for which scenarios to run.
This line is required:
features = "@target/rerun.txt",
Full CucumberOptions:

import cucumber.api.CucumberOptions;
import cucumber.api.junit.Cucumber;
import org.junit.runner.RunWith;

@RunWith(Cucumber.class)
@CucumberOptions(
    monochrome = true,
    features = "@target/rerun.txt", // Cucumber picks the failed scenarios from this file
    format = {"pretty", "html:target/site/cucumber-pretty",
              "json:target/cucumber.json"}
)
public class FailedScenarios {
}
You can use a rerun file path other than target if you need the failed-scenario run to be triggered from Maven as well; in that case, change the path in both files, the main runner and the failed-test runner.
Problem solved :)
In my pom I had the line:
-Dcucumber.options="--plugin io.qameta.allure.cucumberjvm.AllureCucumberJvm"
This line overrides all plugin information in the TestRunner.
I have already:
- downloaded the Cucumber for Java and Gherkin plugins
- created the steps and features directories
My directory structure looks like this:
- test
  - java
    - features
      - featureSet1
        - oneFeature.feature
        - anotherFeature.feature
      - featuresSet2
        - twoFeature.feature
      - CucumberTests.java
    - steps
      - step1.java
      - step2.java
Under the features folder I have a file called CucumberTests.java. I'm able to run the tests via mvn test, but the red error marks really annoy me.
CucumberTests.java, which is supposed to run the tests, has these annotations:
@RunWith(Cucumber.class)
@CucumberOptions(
    plugin = { "pretty", "html:target/surefire-reports/cucumber",
               "json:target/surefire-reports/cucumberOriginal.json" },
    features = { "src/test/java/features/featuresSet1",
                 "src/test/java/features/featuresSet2" },
    tags = { "~@ignore" },
    glue = { "steps" })
The issue is from the Substeps IntelliJ plugin, which IntelliJ suggests you install when it locates a .feature file inside your project.
Just dismiss this suggestion when it pops up, or uninstall the plugin if you already have it.
Cucumber for Java and Gherkin should be enough.
I'm running SonarQube 5.2 with SonarRunner v2.4 (MSBuild) and am having issues getting SonarRunner to pick up the code coverage reports. VS Test drops the TestResults folder within the source directory, and the TRX file is inside that folder. Is there a default directory that Sonar-Runner scans for the TRX/code coverage reports? The build succeeds, but there are no unit test coverage results in SonarQube for the app.
TRX: F:\Builds\50\IT\ABCDemo.Nightly\src\TestResults\*.TRX
Source: F:\Builds\50\IT\ABCDemo.Nightly\src
Process:
1. SonarRunner begin // with key, name, and version filled in
2. MSBuild executes
3. VS Test executes and drops the TestResults folder within the src directory
4. SonarRunner end
Issue:
MSBuild SonarQube Runner Post-processor 1.0.2.0
09:22:57.327 Fetching code coverage report information from TFS...
09:23:17.723 No code coverage reports were found for the current build.
EDIT: I've included the .TRX and .coveragexml files using /d:, but I still get the message saying no code coverage reports were found.
I can see in the logs that it does:
09:51:59.875 INFO - Parsing the Visual Studio coverage XML report f:\Builds\50\IT\ABCDemo.Nightly\src\VisualStudio.coveragexml
09:53:07.737 INFO - Parsing the Visual Studio Test Results file f:\Builds\50\IT\ABCDemo.Nightly\src\TestResults\tfsbuildagent_QL1CIBUILD3 2015-12-03 09_51_33.trx
These occur near the end of the Sonar analysis.
Resolved by including the paths to the TRX and coverage XML files in the arguments to sonar-runner:

MSBuild.SonarQube.Runner.exe begin
    /k:projectkey /n:projectname /v:projectversion
    /d:sonar.cs.vstest.reportsPaths="path\to\*.trx"
    /d:sonar.cs.vscoveragexml.reportsPaths="path\to\VisualStudio.coveragexml"

(The arguments are split across lines here for readability.)