Cypress pipe console.log and command log to output - e2e-testing

Is it possible to redirect or capture the Cypress browser log and command log to output?
I have read some Cypress GitHub issues on this topic, but I don't know how to make it work.
Basically, I want to capture all the Cypress GUI command logs in the headless non-GUI mode. If I can also include the browser console log, even better. The purpose is to understand what happened when a test fails.
I use TeamCity as my CI. Here is an example of my build log; I want to see all the command log here too. Any console.log run on the server side via cy.task is already displayed in the build log, but calling cy.task('log', message) everywhere is too manual. Any smarter ways?
[09:49:08][Step 1/1] 2 of 4: new actions (52s)
[09:50:00][Step 1/1] 3 of 4: new actions (52s)
[09:50:53][Step 1/1] 4 of 4: new actions (53s)
[09:51:47][Step 1/1] (Results)
[09:51:47][Step 1/1]
[09:51:47][Step 1/1] ┌─────────────────────────────────────┐
[09:51:47][Step 1/1] │ Tests: 8 │
[09:51:47][Step 1/1] │ Passing: 8 │
[09:51:47][Step 1/1] │ Failing: 0 │
[09:51:47][Step 1/1] │ Pending: 0 │
[09:51:47][Step 1/1] │ Skipped: 0 │
[09:51:47][Step 1/1] │ Screenshots: 0 │
[09:51:47][Step 1/1] │ Video: true │
[09:51:47][Step 1/1] │ Duration: 3 minutes, 38 seconds │
[09:51:47][Step 1/1] │ Estimated: 1 minute, 8 seconds │
[09:51:47][Step 1/1] │ Spec Ran: action/action_spec.js │
[09:51:47][Step 1/1] └─────────────────────────────────────┘

As of Cypress 3.0.0, you can use cy.task() to access Node directly and print to the Node console. From the docs:
// in a test
cy.task('log', 'This will be output to the terminal')

// in cypress/plugins/index.js
module.exports = (on, config) => {
  on('task', {
    // prints the message in the terminal that runs `cypress run`
    log (message) {
      console.log(message)
      // a task must return a value (or null) to signal completion
      return null
    }
  })
}
See the cy.task() documentation for more info.
I don't know of a way to mirror the Cypress logs to the console directly, but this is at least a workable alternative.
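The same task can be reused wherever you need terminal output. As a small sketch of my own (not from the docs), you could log each test title to the terminal from a global beforeEach hook in cypress/support/index.js:
beforeEach(function () {
  // this.currentTest is provided by Mocha when using a regular function (not an arrow function)
  cy.task('log', 'Running: ' + this.currentTest.fullTitle());
});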

Setting the ELECTRON_ENABLE_LOGGING environment variable to 1 will cause all Chrome internal logging to be printed to the console.
ELECTRON_ENABLE_LOGGING=1 npx cypress run
ELECTRON_ENABLE_LOGGING
Prints Chrome's internal logging to the console.
With this enabled, in addition to capturing any existing logging, you can also log manually from within a test using console.log:
console.log('Response JSON: ' + json)
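For example, a console.log placed inside a .then() callback runs at the right point in the command chain (the endpoint and variable names here are purely illustrative):
cy.request('/api/users').then((response) => {
  const json = response.body;
  // with ELECTRON_ENABLE_LOGGING=1 this appears in the terminal output of `cypress run`
  console.log('Response JSON: ' + JSON.stringify(json));
});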

FYI:
The Cypress team plans to provide native support so that we won't need workarounds to print the logs in non-GUI (headless) runs.
Ongoing issue: https://github.com/cypress-io/cypress/issues/448, which includes references to 3 existing workarounds: https://github.com/cypress-io/cypress/issues/448#issuecomment-613236352

Expanding on @Joshua-wade's answer, you can overwrite cy.log to redirect all calls to it to the log task, like this:
Cypress.Commands.overwrite('log', (originalFn, message) => cy.task('log', message));
Note: there's a small drawback. When you run the test in the Test Runner, instead of seeing LOG my message in the command log, you'll see TASK log, my message. But IMHO that's negligible.

I agree with Araon's approach of overwriting the log function. Another approach, if you want to keep cy.log's default behavior, is to create a custom command. Doc here
Example:
Cypress.Commands.add("printLog", (message) => { cy.task("log", { message }); })
This way you can call cy.printLog(message) instead of cy.task("log", { message });
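For example, in a spec file (the message text is purely illustrative):
it('creates a user', () => {
  // goes to the terminal via the 'log' task, while cy.log keeps its default behavior
  cy.printLog('starting the create-user flow');
});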

Related

Upload local file to azure static web storage container $web using azcopy and terraform local-exec provisioner

I have been struggling to upload a bunch of css/html/js files to a static website hosted in a storage container ($web) using Terraform. It fails even with a single index.html, throwing the error below.
Error: local-exec provisioner error
│
│ with null_resource.frontend_files,
│ on c08-02-website-storage-account.tf line 111, in resource "null_resource" "frontend_files":
│ 111: provisioner "local-exec" {
│
│ Error running command '
azcopy cp --from-to=LocalBlob "../../code/frontend/index.html" "https://***********.blob.core.windows.net/web?sv=2018-11-09&sr=c&st=2022-01-01T00%3A00%3A00Z&se=2023-01-01T00%3A00%3A00Z&sp=racwl&spr=https&sig=*******************" --recursive
': exit status 1. Output: INFO: Scanning...
│ INFO: Any empty folders will not be processed, because source and/or
│ destination doesn't have full folder support
│
│ Job 718f9960-b7eb-7843-648a-6b57d14f5e27 has started
│ Log file is located at:
│ /home/runner/.azcopy/718f9960-b7eb-7843-648a-6b57d14f5e27.log
│
│
100.0 %, 0 Done, 0 Failed, 0 Pending, 0 Skipped, 0 Total,
│
│
│ Job 718f9960-b7eb-7843-648a-6b57d14f5e27 summary
│ Elapsed Time (Minutes): 0.0336
│ Number of File Transfers: 1
│ Number of Folder Property Transfers: 0
│ Total Number of Transfers: 1
│ Number of Transfers Completed: 0
│ Number of Transfers Failed: 1
│ Number of Transfers Skipped: 0
│ TotalBytesTransferred: 0
│ Final Job Status: Failed
│
The $web container is empty, so I placed a dummy index.html file before executing the code to see if that would make the "empty folder" error go away. Still no luck.
I gave the complete set of permissions to the SAS key to rule out any access issue.
I suspect the azcopy command is unable to navigate to the source folder and pick up the contents to upload, but I'm not sure.
Excerpts from tf file:
resource "null_resource" "frontend_files"{
depends_on = [data.azurerm_storage_account_blob_container_sas.website_blob_container_sas,
azurerm_storage_account.resume_static_storage]
provisioner "local-exec" {
interpreter = ["/bin/bash", "-c"]
command = <<EOT
azcopy cp --from-to=LocalBlob "../../code/frontend/index.html" "https://${azurerm_storage_account.resume_static_storage.name}.blob.core.windows.net/web${data.azurerm_storage_account_blob_container_sas.website_blob_container_sas.sas}" --recursive
EOT
}
}
Any help would be appreciated.
Per a solution listed here, we need to add an escape character (\) before $web. The following command (to copy all files and subfolders to the $web container) worked for me:
azcopy copy "<local_folder>/*" "https://******.blob.core.windows.net/\$web/?<SAS token>" --recursive
Without the escape character, it was failing with error: "failed to perform copy command due to error: cannot transfer individual files/folders to the root of a service. Add a container or directory to the destination URL"
azcopy copy "<local_folder>/*" "https://******.blob.core.windows.net/$web/?<SAS token>" --recursive

Terraform Output doesn't exist after running terraform apply

I configure my Terraform using a GCS backend, with a workspace. My CI environment has access to exactly the state file it requires for the workspace.
terraform {
  required_version = ">= 0.14"

  backend "gcs" {
    prefix      = "<my prefix>"
    bucket      = "<my bucket>"
    credentials = "credentials.json"
  }
}
I define the output of my terraform module inside output.tf:
output "base_api_url" {
description = "Base url for the deployed cloud run service"
value = google_cloud_run_service.api.status[0].url
}
My CI server runs terraform apply -auto-approve -lock-timeout 15m. It succeeds and shows me the output in the console logs:
Outputs:
base_api_url = "https://<my project url>.run.app"
But when I then call terraform output base_api_url, it gives me the following warning:
│ Warning: No outputs found
│
│ The state file either has no outputs defined, or all the defined outputs
│ are empty. Please define an output in your configuration with the `output`
│ keyword and run `terraform refresh` for it to become available. If you are
│ using interpolation, please verify the interpolated value is not empty. You
│ can use the `terraform console` command to assist.
I tried calling terraform refresh as the warning suggests, and it tells me:
╷
│ Warning: Empty or non-existent state
│
│ There are currently no remote objects tracked in the state, so there is
│ nothing to refresh.
╵
I'm not sure what to do. I'm calling terraform output RIGHT after I call apply, but it's still giving me no outputs. What am I doing wrong?
I had the exact same issue, and it was happening because I was running the terraform commands from a different path than the one I was in:
terraform -chdir="another/path" apply
Running the output command then fails with that error, unless you cd into that path before running it:
cd "another/path"
terraform output
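Alternatively (my own addition, not part of the original answer), the same global -chdir flag can be passed to the output command so that both commands resolve the same working directory and state:
terraform -chdir="another/path" output base_api_url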

sls offline cloudside command gives error webpack plugin could not find the configuration file ... webpack.config.js

I'm working on a project that uses AWS Lambda and the Serverless (sls) framework.
When I try to run npm start, which runs sls offline cloudside, I get the error:
The webpack plugin could not find the configuration file at: c:\Play\MyProj\src\myproj-backend\services\user-service\d:\Play\MyProj\src\myproj-backend\services\user-service\node_modules\serverless-bundle\src\webpack.config.js
The full output is:
npm start
> users-service#1.0.0 start c:\Play\MyProj\src\myproj-backend\services\user-service
> sls offline cloudside
Serverless: Deprecation warning: Starting with version 3.0.0, following property will be replaced:
"provider.iamRoleStatements" -> "provider.iam.role.statements"
More Info: https://www.serverless.com/framework/docs/deprecations/#PROVIDER_IAM_SETTINGS
Serverless: Deprecation warning: Resolution of lambda version hashes was improved with better algorithm, which will be used in next major release.
Switch to it now by setting "provider.lambdaHashingVersion" to "20201221"
More Info: https://www.serverless.com/framework/docs/deprecations/#LAMBDA_HASHING_VERSION_V2
Serverless: Loading cloudside resources for 'users-service-dev' stack.
Serverless Error ----------------------------------------
The webpack plugin could not find the configuration file at: c:\Play\MyProj\src\myproj-backend\services\user-service\d:\Play\MyProj\src\myproj-backend\services\user-service\node_modules\serverless-bundle\src\webpack.config.js
Get Support --------------------------------------------
Docs: docs.serverless.com
Bugs: github.com/serverless/serverless/issues
Issues: forum.serverless.com
Your Environment Information ---------------------------
Operating System: win32
Node Version: 12.16.2
Framework Version: 2.30.2
Plugin Version: 4.5.1
SDK Version: 4.2.0
Components Version: 3.7.3
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! users-service#1.0.0 start: `sls offline cloudside`
npm ERR! Exit status 1
What is strange is that the same git project works on another development box I use.
Any ideas why this isn't working?
The reason for this is bizarre and is likely related to the way Node.js resolves files on Windows when SYMLINKD directories are involved.
No one else is likely to hit this error, but in case they do, I'm posting my answer.
My host has two disks: c: and d:. For consistency with my other machines, I do all my work under c:\play so that things are simpler for me. On the host that had the problem, I had created a SYMLINKD from c:\play to d:\play. Below is the output of the dir command to show what I mean:
c:\>dir
04/12/2020 07:59 PM <SYMLINKD> Play [d:\Play]
So when I go into the c:\Play directory, I am really in the d:\Play directory, and the following two commands put me in exactly the same place:
cd /d c:\play\myproj\myproj-backend-services\user-service
cd /d d:\play\myproj\myproj-backend-services\user-service
When I ran sls offline cloudside, the webpack.config.js file could not be found. The build was looking for the file:
c:\Play\MyProj\src\myproj-backend\services\user-service\d:\Play\MyProj\src\myproj-backend\services\user-service\node_modules\serverless-bundle\src\webpack.config.js
Upon closer inspection, I realized that the filename was corrupt: something was sticking two directories together. Namely, these two strings were being concatenated to form the path to webpack.config.js:
c:\Play\MyProj\src\myproj-backend\services\user-service\
d:\Play\MyProj\src\myproj-backend\services\user-service\node_modules\serverless-bundle\src\webpack.config.js
The webpack.config.js file did exist, and when I changed to the D: drive everything worked as expected. The error output line that alerted me was:
The webpack plugin could not find the configuration file at: c:\Play\MyProj\src\myproj-backend\services\user-service\d:\Play\MyProj\src\myproj-backend\services\user-service\node_modules\serverless-bundle\src\webpack.config.js
So if anyone else on Windows has used a SYMLINKD (see mklink /d), you too might run into this problem.
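For reference, a directory symlink like the one described above would be created with something like the following (illustrative paths, run from an elevated command prompt):
:: creates a SYMLINKD so that c:\Play points at d:\Play
mklink /D c:\Play d:\Play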
When working correctly (running from the D: drive) the output was:
d:\Play\MyProj\src\myproj-backend\services\user-service>npm start
> users-service#1.0.0 start d:\Play\MyProj\src\myproj-backend\services\user-service
> sls offline cloudside
Serverless: Deprecation warning: Starting with version 3.0.0, following property will be replaced:
"provider.iamRoleStatements" -> "provider.iam.role.statements"
More Info: https://www.serverless.com/framework/docs/deprecations/#PROVIDER_IAM_SETTINGS
Serverless: Deprecation warning: Resolution of lambda version hashes was improved with better algorithm, which will be used in next major release.
Switch to it now by setting "provider.lambdaHashingVersion" to "20201221"
More Info: https://www.serverless.com/framework/docs/deprecations/#LAMBDA_HASHING_VERSION_V2
Serverless: Loading cloudside resources for 'users-service-dev' stack.
Serverless: Bundling with Webpack...
Serverless: Watching for changes...
offline: Starting Offline: dev/us-east-1.
offline: Offline [http for lambda] listening on http://localhost:3002
offline: Function names exposed for local invocation by aws-sdk:
* createUser: users-service-dev-createUser
* getUser: users-service-dev-getUser
* signup: users-service-dev-signup
* login: users-service-dev-login
┌──────────────────────────────────────────────────────────────────────────────┐
│ │
│ POST | http://localhost:4321/dev/user/create │
│ POST | http://localhost:4321/2015-03-31/functions/createUser/invocations │
│ GET | http://localhost:4321/dev/user │
│ POST | http://localhost:4321/2015-03-31/functions/getUser/invocations │
│ POST | http://localhost:4321/dev/user/signup │
│ POST | http://localhost:4321/2015-03-31/functions/signup/invocations │
│ POST | http://localhost:4321/dev/user/login │
│ POST | http://localhost:4321/2015-03-31/functions/login/invocations │
│ │
└──────────────────────────────────────────────────────────────────────────────┘
offline: [HTTP] server ready: http://localhost:4321
offline:
offline: Enter "rp" to replay the last request

Get Gitlab CI coverage with flow coverage report

I'm using flow-coverage-report to get the coverage rate of my code from Flow. I've added a job to my GitLab CI pipeline to execute it and retrieve the coverage rate.
jobName:
  stage: stage
  script:
    - ./node_modules/.bin/flow-coverage-report
  coverage: /MyProject\s*│\s*([\d\.]+)/
The output of the script is a lot of lines, in particular:
┌───────────┬─────────┬───────┬─────────┬───────────┐
│ project │ percent │ total │ covered │ uncovered │
│ MyProject │ 87 % │ 62525 │ 54996 │ 7529 │
└───────────┴─────────┴───────┴─────────┴───────────┘
Note that the table is not drawn with the pipe character |, but with the box-drawing character │.
When I debug the regex with Rubular, as explained in the GitLab documentation, I get the right result in the matching group.
However, every time my job finishes, it has no coverage value. Am I missing something? Are the characters displayed differently?
Note: I have no problem with Jest coverage, for example.
Alright, after digging in the code and other places, I found the culprit: colors in the output.
The first line of the table above was actually displayed in green!
So to have the correct value picked up by the GitLab regex, you can either account for the color codes in the regex or simply strip the colors, like I did:
./node_modules/.bin/flow-coverage-report | sed -r "s/\x1B\[([0-9]{1,2}(;[0-9]{1,2})?)?[mGK]//g"
Thanks to this answer.
Hope it helps.
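Putting it together, the job from the question might look like this with the color-stripping pipe added (same regex and paths as above; an untested sketch):
jobName:
  stage: stage
  script:
    - ./node_modules/.bin/flow-coverage-report | sed -r "s/\x1B\[([0-9]{1,2}(;[0-9]{1,2})?)?[mGK]//g"
  coverage: /MyProject\s*│\s*([\d\.]+)/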

Pm2 logs with huge size using pm2-logrotate

I'm having trouble with pm2.
I'm using a module called pm2-logrotate, but the logs grow huge (around 1.7 GB) and don't respect my configuration, which is:
== pm2-logrotate ==
┌────────────────┬───────────────┐
│ key │ value │
├────────────────┼───────────────┤
│ compress │ true │
│ rotateInterval │ * * */1 * * * │
│ max_size │ 10M │
│ retain │ 1 │
│ rotateModule │ true │
│ workerInterval │ 30 │
└────────────────┴───────────────┘
So what can I do so that pm2 deletes the old logs and doesn't crush my machine with a huge amount of data?
I had this problem too. I think there's currently a bug in pm2-logrotate where the workerInterval option is ignored, and it only rotates according to the rotateInterval option (i.e. once per day by default). And that means that the files can get much bigger than the size you specified with the max_size option. See options here.
I "solved" it by setting the rotateInterval option to every 30 mins instead of the default of once per day. Here's the command:
pm2 set pm2-logrotate:rotateInterval '*/30 * * * *'
The problem with this is that it means your logs will rotate every 30 mins no matter what size they are. Another temporary solution would be to run pm2 flush (which deletes all logs) with crontab. First run crontab -e in your terminal, and then add this line to the file:
*/30 * * * * pm2 flush
You can also flush a specific app with pm2 flush your_app_name if you've got a particular app that produces a lot of logs. If you're not good at remembering how cron timing syntax works (like me), you can use this site.
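For completeness, the settings shown in the question's table are applied with the same pm2 set syntax, for example (values taken from the table above, plus the more frequent rotateInterval suggested in this answer):
pm2 set pm2-logrotate:max_size 10M
pm2 set pm2-logrotate:retain 1
pm2 set pm2-logrotate:compress true
pm2 set pm2-logrotate:workerInterval 30
pm2 set pm2-logrotate:rotateInterval '*/30 * * * *'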
