Get GitLab CI coverage with flow-coverage-report

I'm using flow-coverage-report to get the Flow coverage rate of my code. I've added a job to my GitLab CI pipeline to execute it and retrieve the coverage rate.
jobName:
  stage: stage
  script:
    - ./node_modules/.bin/flow-coverage-report
  coverage: /MyProject\s*│\s*([\d\.]+)/
The script outputs a lot of lines, including in particular:
┌───────────┬─────────┬───────┬─────────┬───────────┐
│ project   │ percent │ total │ covered │ uncovered │
│ MyProject │ 87 %    │ 62525 │ 54996   │ 7529      │
└───────────┴─────────┴───────┴─────────┴───────────┘
Note that the table is drawn with the box-drawing character │, not the regular pipe character |.
When I debug the regex with Rubular as explained in the GitLab Documentation, I get the right result in the matching group.
However, every time my job finishes, it does not have any coverage value. Am I missing something? Are the characters displayed differently?
Note: I have no problems with Jest coverage, for example.

Alright, after digging through the code and other places, I've found the culprit: color codes in the output.
The first line of the table above was actually displayed in green!
So to have the correct value picked up by the GitLab regex, you can either include the colors in the regex or just strip them, as I did:
./node_modules/.bin/flow-coverage-report | sed -r "s/\x1B\[([0-9]{1,2}(;[0-9]{1,2})?)?[mGK]//g"
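For reference, this is roughly how the job from the question looks with that pipe in place (a sketch only, reusing the placeholder job and stage names from above):
jobName:
  stage: stage
  script:
    # strip ANSI color codes so the coverage regex can match the plain table
    - ./node_modules/.bin/flow-coverage-report | sed -r "s/\x1B\[([0-9]{1,2}(;[0-9]{1,2})?)?[mGK]//g"
  coverage: /MyProject\s*│\s*([\d\.]+)/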
Thanks to this answer.
Hope it helps.

Related

Terraform map(object) variable in BitBucket Pipelines

I am deploying Terraform code through a Bitbucket pipeline and am having a problem parsing a map-of-objects variable in the pipeline. Below is the variable:
variable "images" {
type = map(object({
port = number
}))
}
Below is how the variable value is defined in the BitBucket Pipeline variables:
"{"image_one"={port=1000}"image_two"={port=2000}}"
When the pipeline runs, I am getting the following error:
Error: Extra characters after expression
│
│ on <value for var.images> line 1:
│ (source code not available)
│
│ An expression was successfully parsed, but extra characters were found
│ after it.
Below is the command in bitbucket-pipelines.yml showing how the variable is passed in the pipeline:
terraform apply -var images="$IMAGES" -auto-approve
Any advice on how to get the map of objects through the pipeline would be helpful.
Thank you.
Assuming this is your actual code and not a transcription mistake in the question, you are missing a comma between the map entries. It should be:
"{"image_one"={port=1000}, "image_two"={port=2000}}"

Terraform external data error calling aws cli

I'm trying to run an AWS CLI call as a Terraform external data source but am getting an unmarshal string error.
Terraform:
data "external" "cognito" {
program = ["sh", "-c", "aws cognito-idp list-user-pool-clients --user-pool-id eu-west-xxx | jq '.UserPoolClients | .[] | select(.ClientName | contains(\"AmazonOpenSearchService\")) | .ClientId'"]
}
Running the command directly from bash returns:
"123456"
Via Terraform, it errors with:
│ The data source received unexpected results after executing the program.
│
│ Program output must be a JSON encoded map of string keys and string values.
│
│ If the error is unclear, the output can be viewed by enabling Terraform's logging at TRACE level. Terraform documentation on logging: https://www.terraform.io/internals/debugging
│
│ Program: /bin/sh
│ Result Error: json: cannot unmarshal string into Go value of type map[string]string
When running an ad hoc script through the Terraform external data source, the program must return a valid JSON object of string keys and string values (as the docs state), which "123456" is not. You'd need something such as '{"clientId":"123456"}'.
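For example, a sketch of the data source with the jq expression adjusted to wrap the ClientId in a single-key object (the clientId key name is arbitrary, and this assumes exactly one client matches the filter):
data "external" "cognito" {
  # jq now emits e.g. {"clientId":"123456"}, a map of string keys to string values
  program = ["sh", "-c", "aws cognito-idp list-user-pool-clients --user-pool-id eu-west-xxx | jq '{clientId: (.UserPoolClients[] | select(.ClientName | contains(\"AmazonOpenSearchService\")) | .ClientId)}'"]
}
The value is then available as data.external.cognito.result.clientId.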
If you want to avoid the script completely, what Mark B suggested in his comment above is the best way to go about it.

Upload local file to azure static web storage container $web using azcopy and terraform local-exec provisioner

I have been struggling to upload a bunch of css/html/js files to a static website hosted in a storage container ($web) using Terraform. It fails even with a single index.html, throwing the error below.
Error: local-exec provisioner error
│
│ with null_resource.frontend_files,
│ on c08-02-website-storage-account.tf line 111, in resource "null_resource" "frontend_files":
│ 111: provisioner "local-exec" {
│
│ Error running command '
azcopy cp --from-to=LocalBlob "../../code/frontend/index.html" "https://***********.blob.core.windows.net/web?sv=2018-11-09&sr=c&st=2022-01-01T00%3A00%3A00Z&se=2023-01-01T00%3A00%3A00Z&sp=racwl&spr=https&sig=*******************" --recursive
': exit status 1. Output: INFO: Scanning...
│ INFO: Any empty folders will not be processed, because source and/or
│ destination doesn't have full folder support
│
│ Job 718f9960-b7eb-7843-648a-6b57d14f5e27 has started
│ Log file is located at:
│ /home/runner/.azcopy/718f9960-b7eb-7843-648a-6b57d14f5e27.log
│
│
100.0 %, 0 Done, 0 Failed, 0 Pending, 0 Skipped, 0 Total,
│
│
│ Job 718f9960-b7eb-7843-648a-6b57d14f5e27 summary
│ Elapsed Time (Minutes): 0.0336
│ Number of File Transfers: 1
│ Number of Folder Property Transfers: 0
│ Total Number of Transfers: 1
│ Number of Transfers Completed: 0
│ Number of Transfers Failed: 1
│ Number of Transfers Skipped: 0
│ TotalBytesTransferred: 0
│ Final Job Status: Failed
│
The $web container is empty, so I placed a dummy index.html file in it before executing the code to see if that would make this "empty folder" error go away. But still no luck.
I gave the complete set of permissions to the SAS key to rule out any access issue.
I suspect the azcopy command is unable to navigate to the source folder and get the contents to be uploaded, but I am not sure.
Excerpts from tf file:
resource "null_resource" "frontend_files"{
depends_on = [data.azurerm_storage_account_blob_container_sas.website_blob_container_sas,
azurerm_storage_account.resume_static_storage]
provisioner "local-exec" {
interpreter = ["/bin/bash", "-c"]
command = <<EOT
azcopy cp --from-to=LocalBlob "../../code/frontend/index.html" "https://${azurerm_storage_account.resume_static_storage.name}.blob.core.windows.net/web${data.azurerm_storage_account_blob_container_sas.website_blob_container_sas.sas}" --recursive
EOT
}
}
Any help would be appreciated.
Per a solution listed here, we need to add an escape character (\) before $web. The following command (to copy all files and subfolders to the $web container) worked for me:
azcopy copy "<local_folder>/*" "https://******.blob.core.windows.net/\$web/?<SAS token>" --recursive
Without the escape character, the command below was failing with the error "failed to perform copy command due to error: cannot transfer individual files/folders to the root of a service. Add a container or directory to the destination URL":
azcopy copy "<local_folder>/*" "https://******.blob.core.windows.net/$web/?<SAS token>" --recursive

Using variables in terraform backend "s3"

I have written the backend configuration below in Terraform:
terraform {
  backend "s3" {
    bucket = "${var.application_name}"
    region = "${var.AWS_REGION}"
    key    = "tf-scripts/${var.application_name}-tfstate"
  }
}
While running terraform init, I get the error message below:
terraform init
Initializing the backend...
╷
│ Error: Variables not allowed
│
│ on backend.tf line 4, in terraform:
│ 4: bucket = "${var.application_name}"
│
│ Variables may not be used here.
╵
╷
│ Error: Variables not allowed
│
│ on backend.tf line 5, in terraform:
│ 5: region = "${var.AWS_REGION}"
│
│ Variables may not be used here.
╵
╷
│ Error: Variables not allowed
│
│ on backend.tf line 6, in terraform:
│ 6: key = "tf-scripts/${var.application_name}-tfstate"
│
│ Variables may not be used here.
Can anyone assist me with achieving this?
If you want to pass variables you could do something like this:
echo "yes" | terraform init -backend-config="${backend_env}" -backend-config="key=global/state/${backend_config}/terraform.tfstate" -backend-config="region=us-east-1" -backend-config="encrypt=true" -backend-config="kms_key_id=${kmykeyid}" -backend-config="dynamodb_table=iha-test-platform-db-${backend_config}"
Now, the trick here is that this has to be done at the command line when you initialize. Terraform cannot interpolate variables in the backend block, as other community members have already mentioned; it's just the way it is. That said, you can modify the init command and pass the values in as environment variables on your host, or pull them in from another source.
In this example I declared the variables in a container through AWS CodeBuild, but you can use any method as long as the variables are defined prior to initialization. Let me know if you need help with this; the documentation isn't very clear, and while the folks on Stack Overflow have been amazing at addressing it, for beginners it's been hard to understand how this all comes together.
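Another option along the same lines is Terraform's partial backend configuration with a file: leave the bucket/region/key arguments out of the backend "s3" block and supply them at init time with terraform init -backend-config=backend.hcl. The file name and values below are placeholders for what you would otherwise derive from var.application_name and var.AWS_REGION:
# backend.hcl (example file name and values)
bucket = "my-application"
region = "eu-west-1"
key    = "tf-scripts/my-application-tfstate"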

Pm2 logs with huge size using pm2-logrotate

I'm having troubles with pm2.
I'm using a module called pm2-logrotate, but the logs have a huge size, like 1.7G, and don't respect my configuration, which is:
== pm2-logrotate ==
┌────────────────┬───────────────┐
│ key            │ value         │
├────────────────┼───────────────┤
│ compress       │ true          │
│ rotateInterval │ * * */1 * * * │
│ max_size       │ 10M           │
│ retain         │ 1             │
│ rotateModule   │ true          │
│ workerInterval │ 30            │
└────────────────┴───────────────┘
So what can I do so that pm2 deletes the old logs and doesn't start crushing my machine with a huge amount of data?
I had this problem too. I think there's currently a bug in pm2-logrotate where the workerInterval option is ignored, and it only rotates according to the rotateInterval option (i.e. once per day by default). And that means that the files can get much bigger than the size you specified with the max_size option. See options here.
I "solved" it by setting the rotateInterval option to every 30 mins instead of the default of once per day. Here's the command:
pm2 set pm2-logrotate:rotateInterval '*/30 * * * *'
The problem with this is that it means your logs will rotate every 30 mins no matter what size they are. Another temporary solution would be to run pm2 flush (which deletes all logs) with crontab. First run crontab -e in your terminal, and then add this line to the file:
*/30 * * * * pm2 flush
You can also flush a specific app with pm2 flush your_app_name if you've got a particular app that produces a lot of logs. If you're not good at remembering how cron timing syntax works (like me), you can use this site.
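For completeness, the other keys from the table above are set the same way, one pm2 set call per option (the values shown are the ones from the question):
pm2 set pm2-logrotate:max_size 10M
pm2 set pm2-logrotate:retain 1
pm2 set pm2-logrotate:compress true
pm2 set pm2-logrotate:workerInterval 30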
