Pm2 logs with huge size using pm2-logrotate - node.js

I'm having trouble with pm2.
I'm using a module called pm2-logrotate, but the logs have a huge size (around 1.7G) and don't respect my configuration, which is:
== pm2-logrotate ==
┌────────────────┬───────────────┐
│ key            │ value         │
├────────────────┼───────────────┤
│ compress       │ true          │
│ rotateInterval │ * * */1 * * * │
│ max_size       │ 10M           │
│ retain         │ 1             │
│ rotateModule   │ true          │
│ workerInterval │ 30            │
└────────────────┴───────────────┘
So what can I do so that pm2 deletes the old logs and doesn't start crushing my machine with a huge amount of data?

I had this problem too. I think there's currently a bug in pm2-logrotate where the workerInterval option is ignored, and it only rotates according to the rotateInterval option (i.e. once per day by default). And that means that the files can get much bigger than the size you specified with the max_size option. See options here.
I "solved" it by setting the rotateInterval option to every 30 mins instead of the default of once per day. Here's the command:
pm2 set pm2-logrotate:rotateInterval '*/30 * * * *'
The problem with this is that it means your logs will rotate every 30 mins no matter what size they are. Another temporary solution would be to run pm2 flush (which deletes all logs) with crontab. First run crontab -e in your terminal, and then add this line to the file:
*/30 * * * * pm2 flush
You can also flush a specific app with pm2 flush your_app_name if you've got a particular app that produces a lot of logs. If you're not good at remembering how cron timing syntax works (like me), you can use this site.
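For reference, here is a sketch of the full pm2-logrotate configuration discussed above, using the same keys as the table in the question (adjust the values to your needs), plus the crontab fallback:
pm2 set pm2-logrotate:max_size 10M                    # rotate any log that grows past 10M
pm2 set pm2-logrotate:workerInterval 30               # check log sizes every 30 seconds
pm2 set pm2-logrotate:rotateInterval '*/30 * * * *'   # time-based rotation every 30 mins (workaround)
pm2 set pm2-logrotate:retain 1                        # keep only one rotated file per log
pm2 set pm2-logrotate:compress true                   # gzip rotated files
If you also want the crontab fallback, run crontab -e and add:
*/30 * * * * pm2 flush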

Related

Upload local file to azure static web storage container $web using azcopy and terraform local-exec provisioner

I have been struggling with uploading a bunch of css/html/js files to a static website hosted on a storage container $web using terraform. It fails even with a single index.html, throwing the error below.
Error: local-exec provisioner error
│
│ with null_resource.frontend_files,
│ on c08-02-website-storage-account.tf line 111, in resource "null_resource" "frontend_files":
│ 111: provisioner "local-exec" {
│
│ Error running command '
azcopy cp --from-to=LocalBlob "../../code/frontend/index.html" "https://***********.blob.core.windows.net/web?sv=2018-11-09&sr=c&st=2022-01-01T00%3A00%3A00Z&se=2023-01-01T00%3A00%3A00Z&sp=racwl&spr=https&sig=*******************" --recursive
': exit status 1. Output: INFO: Scanning...
│ INFO: Any empty folders will not be processed, because source and/or
│ destination doesn't have full folder support
│
│ Job 718f9960-b7eb-7843-648a-6b57d14f5e27 has started
│ Log file is located at:
│ /home/runner/.azcopy/718f9960-b7eb-7843-648a-6b57d14f5e27.log
│
│
100.0 %, 0 Done, 0 Failed, 0 Pending, 0 Skipped, 0 Total,
│
│
│ Job 718f9960-b7eb-7843-648a-6b57d14f5e27 summary
│ Elapsed Time (Minutes): 0.0336
│ Number of File Transfers: 1
│ Number of Folder Property Transfers: 0
│ Total Number of Transfers: 1
│ Number of Transfers Completed: 0
│ Number of Transfers Failed: 1
│ Number of Transfers Skipped: 0
│ TotalBytesTransferred: 0
│ Final Job Status: Failed
│
The $web container is empty. So I placed a dummy index.html file before I executed the code to see if that would make this "empty folder" error go away. But still no luck.
I gave the complete set of permissions to the SAS key to rule out any access issue.
I suspect the azcopy command is unable to navigate to the source folder and get the contents to be uploaded. I am not sure though.
Excerpts from tf file:
resource "null_resource" "frontend_files"{
depends_on = [data.azurerm_storage_account_blob_container_sas.website_blob_container_sas,
azurerm_storage_account.resume_static_storage]
provisioner "local-exec" {
interpreter = ["/bin/bash", "-c"]
command = <<EOT
azcopy cp --from-to=LocalBlob "../../code/frontend/index.html" "https://${azurerm_storage_account.resume_static_storage.name}.blob.core.windows.net/web${data.azurerm_storage_account_blob_container_sas.website_blob_container_sas.sas}" --recursive
EOT
}
}
Any help would be appreciated.
Per a solution listed here, we need to add an escape character (\) before $web. Following command (to copy all files and subfolders to web container) worked for me:
azcopy copy "<local_folder>/*" "https://******.blob.core.windows.net/\$web/?<SAS token>" --recursive
Without the escape character, the following command was failing with the error: "failed to perform copy command due to error: cannot transfer individual files/folders to the root of a service. Add a container or directory to the destination URL":
azcopy copy "<local_folder>/*" "https://******.blob.core.windows.net/$web/?<SAS token>" --recursive

how to prevent resource creation in terraform when referring to the output of one project with common resources in another project

In one solution I have 2 projects. The common-infra project is for creating the ECS cluster and common ECS services, like nginx, used by all other services. The ecs-service1 project contains the resource definitions for creating ECS services. I reference resource ARNs created in the common-infra project in my ecs-service1 project.
I first go to common-infra and run terraform plan and apply. Now the cluster and nginx service are up and running. Next I go to ecs-service1 and run terraform plan. At this point it recognizes the fact that I have linked to the common-infra module and shows that it will create the cluster and common services like nginx again.
Is there a way to arrange/reference the projects in such a way that when I run terraform plan in ecs-service1 it knows that common-infra is already built, it knows its state, and it creates only the resources in ecs-service1, only pulling in the ARN references created in common-infra?
.
├── ecs-service1
│   ├── main.tf
│   ├── task-def
│   │   ├── adt-api-staging2-task-definition.json
│   │   └── adt-frontend-staging2-task-definition.json
│   ├── terraform.tfstate
│   ├── terraform.tfstate.backup
│   └── variables.tf
├── common-infra
│   ├── main.tf
│   ├── task-def
│   │   └── my-nginx-staging2-task-definition.json
│   ├── terraform.tfstate
│   ├── user-data.sh
│   └── variables.tf
└── script
    └── get-taskdefinitions.sh
common-infra main.tf
output "splat_lb_listener_http_80_arn"{
value = aws_lb_listener.http_80.arn
}
output "splat_lb_listener_http_8080_arn"{
value = aws_lb_listener.http_8080.arn
}
output "splat_ecs_cluster_arn" {
value = aws_ecs_cluster.ecs_cluster.arn
}
ecs-service1 main.tf
module "splat_common" {
source = "../common-infa"
}
resource "aws_ecs_service" "frontend_webapp_service" {
name = var.frontend_services["service_name"]
cluster = module.splat_common.splat_ecs_cluster_arn
...
}
There are a few solutions, but first I'd like to say that your ecs-service should be calling common-infra as a module only - so that all of the resource creation is handled at once (and not split apart as you describe).
Another solution would be to use terraform import to get the current state into your existing terraform. This is less than ideal, because now you have the same infrastructure being managed by 2 state files.
If you are including the common-infra because it provides some output, you should look into using data lookups (https://www.terraform.io/docs/language/data-sources/index.html). You can even reference output of other terraform state (https://www.terraform.io/docs/language/state/remote-state-data.html) (although I've never actually tried this, it can be done).
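For the remote state option, here is a minimal sketch that reads the common-infra outputs through a terraform_remote_state data source pointing at the local state file from the layout above (the data source name common_infra is arbitrary):
# in ecs-service1/main.tf
data "terraform_remote_state" "common_infra" {
  backend = "local"
  config = {
    path = "../common-infra/terraform.tfstate"
  }
}
resource "aws_ecs_service" "frontend_webapp_service" {
  name    = var.frontend_services["service_name"]
  # read the ARN from common-infra's state instead of instantiating it as a module
  cluster = data.terraform_remote_state.common_infra.outputs.splat_ecs_cluster_arn
  ...
}
With this, terraform plan in ecs-service1 only reads the outputs recorded in common-infra's state and no longer tries to create the cluster or the nginx service itself.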

Get Gitlab CI coverage with flow coverage report

I'm using flow-coverage-report to get the coverage rate of my code by Flow. I've added a job in my Gitlab CI pipeline to execute it and retrieve the coverage rate.
jobName:
  stage: stage
  script:
    - ./node_modules/.bin/flow-coverage-report
  coverage: /MyProject\s*│\s*([\d\.]+)/
The output of the script is a lot of lines, and more particularly:
┌───────────┬─────────┬───────┬─────────┬───────────┐
│ project   │ percent │ total │ covered │ uncovered │
│ MyProject │ 87 %    │ 62525 │ 54996   │ 7529      │
└───────────┴─────────┴───────┴─────────┴───────────┘
They are not using the pipe character | for the table, but │
When I debug the regex with Rubular as explained in the GitLab Documentation, I get the right result in the matching group.
However, every time my job finishes, it does not have any coverage value. Am I missing something? Are the characters displayed differently?
Note: I have no problems with Jest coverage, for example.
Alright, after digging in the code and other places, I've found the culprit: colors in the output.
The first line of the table above was actually displayed in green!
So to have the correct value interpreted by the GitLab regex, one can include the colors in the regex or just strip the colors like I did:
./node_modules/.bin/flow-coverage-report | sed -r "s/\x1B\[([0-9]{1,2}(;[0-9]{1,2})?)?[mGK]//g"
Thanks to this answer.
Hope it helps.
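Putting it together with the job from the question, the updated .gitlab-ci.yml entry would look roughly like this:
jobName:
  stage: stage
  script:
    - ./node_modules/.bin/flow-coverage-report | sed -r "s/\x1B\[([0-9]{1,2}(;[0-9]{1,2})?)?[mGK]//g"
  coverage: /MyProject\s*│\s*([\d\.]+)/
One caveat: with the pipe, the step's exit status is sed's, so a failing flow-coverage-report run may still let the job pass unless pipefail is enabled in your runner's shell.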

Cypress pipe console.log and command log to output

Is it possible to redirect or capture Cypress browser log and command log to output?
I read some Cypress github issues on this topic. But I don't know how to make it work.
Basically, I want to capture all the Cypress GUI command logs in the headless non-GUI mode. If I can include the browser console log as well, that will be even better. The purpose is to understand what happened when a test fails.
I use TeamCity as CI. Here is an example of my build log. I want to see all the command logs here too. Actually, any console.log run on the server side using cy.task is displayed in the build log. Running cy.task('log', message) is too manual. Any smarter ways?
[09:49:08][Step 1/1] 2 of 4: new actions (52s)
[09:50:00][Step 1/1] 3 of 4: new actions (52s)
[09:50:53][Step 1/1] 4 of 4: new actions (53s)
[09:51:47][Step 1/1] (Results)
[09:51:47][Step 1/1]
[09:51:47][Step 1/1] ┌────────────────────────────────────┐
[09:51:47][Step 1/1] │ Tests:        8                    │
[09:51:47][Step 1/1] │ Passing:      8                    │
[09:51:47][Step 1/1] │ Failing:      0                    │
[09:51:47][Step 1/1] │ Pending:      0                    │
[09:51:47][Step 1/1] │ Skipped:      0                    │
[09:51:47][Step 1/1] │ Screenshots:  0                    │
[09:51:47][Step 1/1] │ Video:        true                 │
[09:51:47][Step 1/1] │ Duration:     3 minutes, 38 seconds│
[09:51:47][Step 1/1] │ Estimated:    1 minute, 8 seconds  │
[09:51:47][Step 1/1] │ Spec Ran:     action/action_spec.js│
[09:51:47][Step 1/1] └────────────────────────────────────┘
As of Cypress 3.0.0, you can use cy.task() to access node directly and output to the node console. From the docs:
// in test
cy.task('log', 'This will be output to the terminal')
// in plugins file
on('task', {
  log (message) {
    console.log(message)
    return null
  }
})
See here for more info.
I don't know of a way to mirror the Cypress logs to the console directly, but this is at least a workable alternative.
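If you're on Cypress 10 or newer, where the plugins file was replaced by the config file, the same task can be registered via setupNodeEvents. A sketch of a cypress.config.js (not part of the original answer):
// cypress.config.js
const { defineConfig } = require('cypress')

module.exports = defineConfig({
  e2e: {
    setupNodeEvents(on, config) {
      // same 'log' task as above; messages show up in the terminal running cypress run
      on('task', {
        log(message) {
          console.log(message)
          return null
        },
      })
    },
  },
})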
Setting the ELECTRON_ENABLE_LOGGING environment variable to 1 will cause all Chrome internal logging to be printed to the console.
ELECTRON_ENABLE_LOGGING=1 npx cypress run
ELECTRON_ENABLE_LOGGING
Prints Chrome's internal logging to the console.
With this enabled, in addition to capturing any existing logging, this will also allow you to manually log within a test using console.log:
console.log('Response JSON: ' + json)
FYI:
The Cypress community is going to provide native support so that we don't have to do any workarounds to print the logs in the non-GUI (headless) CLI.
Ongoing issue: https://github.com/cypress-io/cypress/issues/448 includes reference to 3 existing workarounds https://github.com/cypress-io/cypress/issues/448#issuecomment-613236352
Expanding on Joshua-wade's answer, you can overwrite cy.log to redirect all calls to it to the log task, as follows:
Cypress.Commands.overwrite('log', (subject, message) => cy.task('log', message));
Note: there's a small drawback to this: when you run the test using the Test Runner, instead of seeing LOG my message in the command log, you'll see TASK log, my message. But IMHO it's negligible.
I agree with Araon's approach of using overwrite on the log function. Another approach, if you want to keep cy.log's default behavior, would be to create a custom command. Doc here.
Example:
Cypress.Commands.add("printLog", (message) => { cy.task("log", {message}); })
This way you can call cy.printLog(message) instead of cy.task("log", {message});
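For completeness, a sketch of how the custom command pairs with the log task from the first answer (paths assume a pre-Cypress-10 default layout):
// cypress/support/commands.js
Cypress.Commands.add('printLog', (message) => {
  // forwards to the 'log' task registered in the plugins file, so the message
  // reaches the terminal / CI build log
  cy.task('log', message)
})

// in a spec
cy.printLog('something worth seeing in the build log')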

What are the differences between GLI and CLI in Linux from the kernel's perspective [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 8 years ago.
Can anyone tell me what the differences are between GLI and CLI from a developer's perspective?
I also want to know how the Linux system sets up GLI and CLI after booting.
For CLI, when a new user logs in after system booting, the init process does a fork and in turn execs the getty program for user login. After the user has entered a username and password, getty verifies the identity of the current login user. If everything is OK, getty calls execle to load the login program, which in turn invokes a shell.
But what does the kernel do when booting a graphical desktop?
Thanks a lot.
It's not about the kernel at all. It's about how init is configured and which of its runlevels is started. The command pstree -u is your friend.
├─mdm───mdm─┬─Xorg
│           ├─x-session-manag(szg)─┬─applet.py───{applet.py}
│           │                      ├─gpg-agent
│           │                      ├─marco───2*[{marco}]
│           │                      ├─mate-bluetooth-───2*[{mate-bluetooth-}]
│           │                      ├─mate-panel───2*[{mate-panel}]
│           │                      ├─mate-power-mana───2*[{mate-power-mana}]
│           │                      ├─mate-screensave───2*[{mate-screensave}]
│           │                      ├─mate-settings-d───4*[{mate-settings-d}]
│           │                      ├─mate-volume-con───{mate-volume-con}
│           │                      ├─mintupdate-laun───sh───mintUpdate───2*[{mintUpdate}]
│           │                      ├─nm-applet───2*[{nm-applet}]
│           │                      ├─notgmail───sleep
│           │                      ├─polkit-mate-aut───{polkit-mate-aut}
│           │                      ├─sh───caja───3*[{caja}]
│           │                      ├─ssh-agent
│           │                      ├─tapeta───sleep
│           │                      ├─zeitgeist-datah───3*[{zeitgeist-datah}]
│           │                      └─3*[{x-session-manag}]
│           └─{mdm}
You can see a MATE desktop session above. init starts mdm, the MATE Desktop Manager, as root; it in turn starts the Xorg X server to handle the hardware and a session manager where you log in, then it drops privileges and starts your user-level desktop services.
This does not happen instead of the gettys, but alongside them. You can still log in on the CLI with Ctrl-Alt-F1, etc.
