Cypress CLI console output not very readable - node.js

I'm running Cypress tests headlessly and would like the console output to be a little more readable. Currently I get very messy output, as seen below. According to the documentation, Cypress should be using the Mocha spec reporter layout. Can anyone tell me what I need to do to make this output readable?
I'm running ./node_modules/.bin/cypress run
Started video recording: ←[36mC:\code\website\ui\cypress\videos\vf7hm.mp4←[39m
←[90m (←[4m←[1mTests Starting←[22m←[24m)←[39m
←[0m←[0m
←[0m My First Test←[0m
←[32m ΓêÜ←[0m←[90m Gets, types and asserts←[0m←[31m (18965ms)←[0m
←[92m ←[0m←[32m 1 passing←[0m←[90m (21s)←[0m
←[32m (←[4m←[1mTests Finished←[22m←[24m)←[39m
←[37m - Tests: ←[39m←[32m1←[39m
←[37m - Passes: ←[39m←[32m1←[39m
←[37m - Failures: ←[39m←[32m0←[39m
←[37m - Pending: ←[39m←[32m0←[39m
←[37m - Duration: ←[39m←[32m20 seconds←[39m
←[37m - Screenshots: ←[39m←[32m0←[39m
←[37m - Video Recorded: ←[39m←[32mtrue←[39m
←[37m - Cypress Version: ←[39m←[32m1.4.2←[39m
←[36m (←[4m←[1mVideo←[22m←[24m)←[39m
- Started processing: ←[36mCompressing to 32 CRF←[39m
- Finished processing: ←[36mC:\code\website\ui\cypress\videos\vf7hm.mp4←[39m ←[90m(1 second)←[39m
←[90m (←[4m←[1mAll Done←[22m←[24m)←[39m

The messy output is because Cypress formats its output with ANSI color escape sequences, which your log viewer/console doesn't understand. You can disable ANSI color output by setting the environment variable NO_COLOR:
NO_COLOR=1 cypress run
See https://docs.cypress.io/guides/continuous-integration/introduction#Colors
This was added in Cypress 3.0.0, released on 5/29/2018.
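Note that the VAR=value prefix syntax only works in POSIX shells. Since the output above is from Windows, a sketch of the equivalents (assuming the cross-env package for the portable variant):
:: Windows cmd.exe: set the variable for the session, then run
set NO_COLOR=1
cypress run
:: or, portable across shells, via the cross-env dev dependency
npx cross-env NO_COLOR=1 cypress run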

It could be one of two issues:
1. Cypress is using ANSI colors and Jenkins isn't configured to render them. To fix, install a plugin such as AnsiColor (https://plugins.jenkins.io/ansicolor/) and see the Jenkinsfile sketch after this list.
2. The encoding may not be UTF-8 (yours looks like it is, but others' may not be). To fix, navigate to Manage Jenkins => Configure System => Global Properties and add the environment variable JAVA_TOOL_OPTIONS with the value -Dfile.encoding=UTF-8.
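For the first issue, a minimal declarative Jenkinsfile sketch, assuming the AnsiColor plugin is installed (the stage name and run command are illustrative):
pipeline {
    agent any
    stages {
        stage('e2e') {
            steps {
                // ansiColor is provided by the AnsiColor plugin; it makes
                // Jenkins render the ANSI sequences instead of printing them raw
                ansiColor('xterm') {
                    sh 'npx cypress run'
                }
            }
        }
    }
}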

I was getting the same issue and couldn't add the AnsiColor plugin to my Jenkins, so I just added NO_COLOR=1 before the test run command, like so:
NO_COLOR=1 npx cypress run
Adding this to the command solved my issue. It's a simple fix, and you don't need to install any extra plugins.
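If you start the tests through an npm script, the same prefix can live in package.json; a sketch (the script name cy:run is illustrative, and the prefix assumes a POSIX shell):
"scripts": {
  "cy:run": "NO_COLOR=1 cypress run"
}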

As far as I know, this is a Windows-specific output issue in Cypress: https://github.com/cypress-io/cypress/issues/1143

This worked for me as well in Jenkins CI:
NO_COLOR=1 cypress run
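In a declarative pipeline, the variable can also be set once for the whole build instead of prefixing each command; a minimal sketch:
pipeline {
    agent any
    environment {
        // disables ANSI color output for every step in the build
        NO_COLOR = '1'
    }
    stages {
        stage('e2e') {
            steps {
                sh 'npx cypress run'
            }
        }
    }
}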

Related

How to run Cypress tests on the CircleCI orb using cypress-tags

I am trying to run a small test collection with Cypress using the Cucumber plugin and the official CircleCI orb.
I've been going through the documentation and I've got them running without issues locally. The script I'm using to run them locally is this one:
"test:usa": "cypress-tags run --headless -b chrome -e TAGS='@usa'"
*Note the cypress-tags command and the TAGS option.
For the CI I use the official CircleCI orb and have a configuration like this:
- cypress/run:
    name: e2e tests
    executor: with-chrome
    record: true
    parallel: true
    parallelism: 2
    tags: 'on-pr'
    group: 2x-chrome
    ci-build-id: '${CIRCLE_BUILD_NUM}'
    requires:
      - org/deployment
As you can see, I want to spin up 2 machines across which I divide my feature files, setting the tag to 'on-pr' and grouping the run under '2x-chrome', using the ci-build-id as well.
The thing is that the official Orb uses the cypress run command which does not filter scenarios by their tags, so it is of no use here. My options were:
Using the command parameter in the orb to call the required script as I do locally:
command: npm run test:usa
My problem with this option is that the parallel configuration does not work as expected, so I discarded it.
I tried to pass the TAGS parameter as an env var within the CircleCI executor to see if the orb would pick it up, but it was of no use since the orb runs cypress run, not cypress-tags run:
environment:
  TAGS: '@usa'
At this point my question is: is there a workaround for this (using cypress-tags with the CircleCI orb), or should I opt for a different way of testing this?
Thanks in advance
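For reference, this is what option 1 looks like spelled out against the orb config above; whether the orb's automatic spec splitting still applies once command is overridden is exactly the open question:
- cypress/run:
    name: e2e tests
    executor: with-chrome
    # overrides the orb's default `cypress run`; mirrors the local npm script
    command: npx cypress-tags run --headless -b chrome -e TAGS='@usa'
    parallel: true
    parallelism: 2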

Azure pipeline - ERROR:gpu_process_transport_factory.cc(1029) Lost UI shared context

I have a pipeline on Azure that runs on a Windows 10 virtual machine that at some point calls a test task for an assembly (.dll) that tests functions for a Revit (3D modelling software) plugin.
In order to run the tests, the pipeline is simply running a command line task that starts RevitTestFramework, an open source application (https://github.com/DynamoDS/RevitTestFramework) used for this kind of testing.
Here are the relevant parts of my pipeline's yaml:
trigger:
- develop
pool: 'Default'
variables:
  solution: '**/*.sln'
  buildPlatform: 'Any CPU'
  buildConfiguration: 'Debug'
steps:
# Some steps here
- task: CmdLine@2
  inputs:
    script: |
      cd %ALLUSERSPROFILE%\RevitTestFramework\bin\AnyCPU\Debug
      RevitTestFrameworkConsole.exe --dir %ALLUSERSPROFILE%\RevitTestFramework\bin\AnyCPU\Debug -a %ALLUSERSPROFILE%\RevitTestFramework\Tests\ModelEstablishment.IntegrationTests\bin\Debug\ModelEstablishment.IntegrationTests.dll -r %ALLUSERSPROFILE%\RevitTestFramework\Tests\results.xml -revit:"C:\Program Files\Autodesk\Revit 2020\Revit.exe" --continuous
Where %ALLUSERSPROFILE% is C:\ProgramData, but I also tried different folders (including C:) with the same result.
The very last line is the one that causes the issue. If it is a bit confusing: it just invokes RevitTestFrameworkConsole.exe, which lives under the directory given by --dir; it tests the assembly at -a, writes the results to -r, and uses the version of Revit specified by the path after -revit.
If I run this from the command line in Windows (not through the Azure pipeline) it runs perfectly.
But if Azure runs it, it starts idling, repeating these lines until it cancels itself:
DevTools listening on ws://127.0.0.1:8088/devtools/browser/fa35cb10-8f4d-468f-9b0e-6457845ff8b2
Running C:\RevitTestFramework\bin\AnyCPU\Debug\RTF_Batch_Test.txt
[1202/180047.796:ERROR:gpu_process_transport_factory.cc(1029)] Lost UI shared context.
I've done my research, but all I can find is that this error shouldn't be an actual error that breaks things, and that it usually happens when testing headless Chrome (which I'm far from doing).
Does anyone know what's going on here and how to fix it?
UPDATE
By comparing what happens when I run the command manually in the VM's command line with what happens when Azure runs the same command, I've noticed that right after the line that says Running C:\ProgramData\RevitTestFramework\bin\AnyCPU\Debug\RTF_Batch_Test.txt (see screenshots), Revit is supposed to start up and run the tests. So I'm thinking that Azure's pipeline runs that command differently from how I run it on the same VM's command line.
Maybe this helps in understanding the issue.

How do I run 'tagged' scenarios with Cucumber tags in WebdriverIO

Hi, if anyone can help here: I am trying to run a specific scenario using Cucumber tags. This is the expression I am using to run the tests built with the WebdriverIO-Cucumber framework:
npx wdio run wdio.conf.js --cucumberOpts.tagExpression='@Tag'
When I use the above, nothing happens. I have defined the tag @Tag at the feature level, so I expect all the scenarios within the feature file to be executed; however, when I run the above command, nothing happens. Can someone please help?
If you want to run only specific tests you can mark your features with tags.
These tags will be placed before each feature like so:
@sanity
Feature: checking for sanity test
  Scenario: ...
    Given ...
    When ...
    Then ...
To run only the tests with specific tag(s), use the --cucumberOpts.tagExpression= parameter, like:
wdio -- --cucumberOpts.tagExpression='@sanity'
wdio -- --cucumberOpts.tagExpression='@sanity or @AnotherTag'
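The same expression can also be set as a default in the config file rather than on the command line; a sketch, assuming the Cucumber framework is configured in wdio.conf.js (on newer WebdriverIO versions the key may be named tags instead):
// wdio.conf.js
exports.config = {
    // ...
    framework: 'cucumber',
    cucumberOpts: {
        // default tag filter; --cucumberOpts.tagExpression on the CLI overrides it
        tagExpression: '@sanity',
    },
};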
I can't pinpoint the exact issue you're facing from the info you provided. I'd suggest having a look at the Cucumber tag documentation and the WebdriverIO test suite documentation:
cucumber
webdriverio
I hope this helps. Happy learning!
I got it working by having this in package.json:
"scripts": { "test": "wdio ./wdio.conf.js --cucumberOpts.tagExpression" }
and running the tests with:
npm run test "@Tag"
Or use it as below:
"test-imagesave": "wdio run 'config/wdio.conf.ts' --cucumberOpts.tagExpression '@tag'",

GitLab CI variables return empty strings?

It's been two days since one of my projects' builds started failing on GitLab CI. The main error was E_MISSING_APP_KEY, and when I checked other variables just by echoing $HOST and $PORT from my .gitlab-ci.yml config, like this:
tests:
  script:
    - echo "${HOST} ${PORT}"
    - node -e "console.log(process.env.HOST, process.env.PORT)"
    - node_modules/.bin/nyc node ace test -t 0
I got nothing.
The build failed because it can't read the environment variables I set in the project's CI settings.
Is anyone experiencing the same issue, and how can I solve it?
Update:
I tried creating a new project containing only the .gitlab-ci.yml file here, and it seems to work just fine.
But why in the world is it still failing in my main project?
For anyone else having a similar problem:
check your variable; if it is protected, your branch has to be protected as well, or else remove the protected option on your variable
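A quick way to verify that is a throwaway job run on both a protected and an unprotected branch; a sketch (the job name is illustrative):
debug_vars:
  script:
    # prints the fallback message if the runner never receives the variables
    - env | grep -E '^(HOST|PORT)=' || echo "HOST/PORT not injected"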
The issue was solved by deleting all of my variables and setting them back in the CI settings. The build pipeline now runs without any errors (except the actual tests still fail, lol).
Honestly, I'm still wondering why this happened, and hopefully no one else experiences this same kind of issue.

How can I run mocha tests remotely on IntelliJ IDEA 13 (or WebStorm)?

IntelliJ IDEA 13 has really excellent support for Mocha tests through the Node.js plugin: https://www.jetbrains.com/idea/webhelp/running-mocha-unit-tests.html
The problem is, while I edit code on my local machine, I have a VM (vagrant) in which I run and test the code, so it's as production-like as possible.
I wrote a small bash script to run my tests remotely on this VM whenever I invoke "Run" from within IntelliJ, and the results pop up in the console well enough, however I'd love to use the excellent interface that appears whenever the Mocha test runner is invoked.
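For context, such a wrapper boils down to running mocha inside the VM over ssh; a sketch with illustrative paths and IP:
#!/usr/bin/env bash
# run the mocha suite inside the vagrant VM and stream the output back
ssh -i ~/.vagrant.d/insecure_private_key vagrant@192.168.33.10 \
  "cd /vagrant && node_modules/mocha/bin/_mocha --recursive test"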
Any ideas?
Update: There's a much better way to do this now. See https://github.com/TechnologyAdvice/fake-mocha
Success!!
Here's how I did it. This is specific to connecting back to vagrant, but can be tweaked for any remote server to which you have key-based SSH privileges.
Somewhere on your remote machine, or even within your codebase, store the NodeJS plugin's mocha reporter (6 .js files at the time of this writing). These are found in NodeJS/js/mocha under your main IntelliJ config folder, which on OSX is ~/Library/Application Support/IntelliJIdea13. Know the absolute path to where you put them.
Edit your 'Run Configurations'
Add a new one using 'Mocha'
Set 'Node interpreter' to the full path to your ssh executable. On my machine, it's /usr/bin/ssh.
Set the 'Node options' to this behemoth, tweaking as necessary for your own configuration:
-i /Users/USERNAME/.vagrant.d/insecure_private_key vagrant@MACHINE_IP "cd /vagrant; node_modules/mocha/bin/_mocha --recursive --timeout 2000 --ui bdd --reporter /vagrant/tools/mocha_intellij/mochaIntellijReporter.js test" #
REMEMBER! The # at the end is IMPORTANT, as it will cancel out everything else the Mocha run config adds to this command. Also, remember to use an absolute path everywhere that I have one.
Set 'Working directory', 'Mocha package', and 'Test directory' to exactly what they should be if you were running mocha tests locally. These will not impact the test execution, but this interface WILL check to make sure these are valid paths.
Name it, save, and run!
Fully integrated, remote testing bliss.
1) In Webstorm, create a "Remote Debug" configuration, using port 5858.
2) Make sure that port is open on your server or VM.
3) On the remote server, execute Mocha with the --debug-brk option: mocha test --debug-brk
4) Back in Webstorm, start the remote debug you created in Step 1, and execution should pause on set breakpoints.
