Why doesn't Exim use the OpenDKIM service? - linux

I tried to configure Exim + OpenDKIM on CentOS 7 (everything at the latest version from the repositories).
I used this guide to configure the system: https://www.rosehosting.com/blog/how-to-install-and-configure-dkim-with-opendkim-and-exim-on-a-centos-7-vps/ , but I didn't use the default selector; I used a unique one.
The outgoing mail has no DKIM signature. I use this config in Exim:
remote_smtp:
  driver = smtp
  DKIM_DOMAIN = $sender_address_domain
  DKIM_SELECTOR = 20170915exim
  DKIM_PRIVATE_KEY = ${if exists{/etc/opendkim/keys/$sender_address_domain/20170915exim}{/etc/opendkim/keys/$sender_address_domain/20170915exim}{0}}
  DKIM_CANON = relaxed
  DKIM_STRICT = 0
with this /etc/opendkim layout:
.
├── keys
│ └── valami.com
│ ├── 20170915exim
│ └── 20170915exim.txt
├── KeyTable
├── SigningTable
└── TrustedHosts
But when I send a mail (with mail, via telnet, or anything else), Exim doesn't use OpenDKIM. The opendkim daemon is, of course, listening on its port:
tcp 0 0 127.0.0.1:8891 0.0.0.0:* LISTEN 6663/opendkim
When I send a mail from localhost to an outside address:
2017-09-15 15:53:20 1dsr3M-0005fK-Ul <= root@valami.com H=localhost [127.0.0.1] P=smtp S=341
2017-09-15 15:53:21 1dsr3M-0005fK-Ul => xxx@gmail.com R=dnslookup T=remote_smtp H=gmail-smtp-in.l.google.com [74.125.133.26] X=TLSv1.2:ECDHE-RSA-AES128-GCM-SHA256:128 CV=yes K C="250 2.0.0 OK o1si854413wrg.487 - gsmtp"
2017-09-15 15:53:21 1dsr3M-0005fK-Ul Completed
Why doesn't the Exim daemon call the OpenDKIM interface?
Thanks for your help!

I solved it!
I had to add the dkim_sign_headers option to the configuration file:
remote_smtp:
  driver = smtp
  dkim_domain = $sender_address_domain
  dkim_selector = 20170915exim
  dkim_private_key = ${if exists{/etc/opendkim/keys/$dkim_domain/$dkim_selector}{/etc/opendkim/keys/$dkim_domain/$dkim_selector}{0}}
  dkim_canon = relaxed
  dkim_strict = 0
  dkim_sign_headers = subject:to:from

Related

How to add a label to my vm instance in gcp via terraform/terragrunt

I have an issue in our environment where I cannot add a label to a VM instance in GCP via Terraform/Terragrunt after creation. We have a Google repository that is set up via Terraform, and we use git to clone and update from a local repository; pushing activates a trigger on Cloud Build which applies the changes. We do not use terraform/terragrunt commands at all; it is all controlled via git. The labels are referenced in our compute module as shown:
variable "labels" {
  description = "Labels to add."
  type        = map(string)
  default     = {}
}
OK, on to the issue. We have a mix of lift-and-shift and cloud-native VM instances in our environment. We recently decided to add an additional label in the code to identify whether the instance is under Terraform control, i.e. terraform = "true"/"false".
labels = {
  application      = "demo-test"
  businessunit     = "homes"
  costcentre       = "90imt"
  createdby        = "ab"
  department       = "it"
  disasterrecovery = "no"
  environment      = "rnd"
  contact          = "abriers"
  terraform        = "false"
}
}
So I add the label and use the usual git commands (add, commit, push etc.), which triggers Cloud Build as usual. The problem is that the label does not appear in the console when viewing the instance.
It's as if Cloud Build or Terraform/Terragrunt isn't recognising it as a change. I can change the value of a label no problem, but I cannot seem to add or remove a label after the VM has been created.
It has been suggested to run terraform/terragrunt plan in VS Code, but as mentioned, this has all been set up to use git, so those commands do not work.
For example, I run terragrunt init in the directory and get this error:
PS C:\Cloudrepos\placesforpeople> terragrunt init
time=2022-07-27T09:56:27+01:00 level=error msg=Error reading file at path C:/Cloudrepos/placesforpeople/terragrunt.hcl: open C:/Cloudrepos/placesforpeople/terragrunt.hcl: The system cannot find the
file specified.
time=2022-07-27T09:56:27+01:00 level=error msg=Unable to determine underlying exit code, so Terragrunt will exit with error code 1
PS C:\Cloudrepos\placesforpeople> cd org
PS C:\Cloudrepos\placesforpeople\org> cd rnd
PS C:\Cloudrepos\placesforpeople\org\rnd> cd adam_play_area
PS C:\Cloudrepos\placesforpeople\org\rnd\adam_play_area> ls
Mode LastWriteTime Length Name
---- ------------- ------ ----
d----- 20/07/2022 14:18 modules
d----- 20/07/2022 14:18 test_project_001
PS C:\Cloudrepos\placesforpeople\org\rnd\adam_play_area> cd test_project_001
PS C:\Cloudrepos\placesforpeople\org\rnd\adam_play_area\test_project_001> cd compute
PS C:\Cloudrepos\placesforpeople\org\rnd\adam_play_area\test_project_001\compute> ls
Directory: C:\Cloudrepos\placesforpeople\org\rnd\adam_play_area\test_project_001\compute
Mode LastWriteTime Length Name
---- ------------- ------ ----
d----- 07/07/2022 15:51 start_stop_schedule
d----- 20/07/2022 14:18 umig
-a---- 07/07/2022 16:09 1308 .terraform.lock.hcl
-a---- 27/07/2022 09:56 2267 terragrunt.hcl
PS C:\Cloudrepos\placesforpeople\org\rnd\adam_play_area\test_project_001\compute> terragrunt init
Initializing modules...
- data_disk in ..\compute_data_disk
Initializing the backend...
Successfully configured the backend "gcs"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing provider plugins...
- Reusing previous version of hashicorp/google from the dependency lock file
- Reusing previous version of hashicorp/google-beta from the dependency lock file
╷
│ Warning: Backend configuration ignored
│
│ on ..\compute_data_disk\backend.tf line 3, in terraform:
│ 3: backend "gcs" {}
│
│ Any selected backend applies to the entire configuration, so Terraform
│ expects provider configurations only in the root module.
│
│ This is a warning rather than an error because it's sometimes convenient to
│ temporarily call a root module as a child module for testing purposes, but
│ this backend configuration block will have no effect.
╵
╷
│ Error: Failed to query available provider packages
│
│ Could not retrieve the list of available versions for provider
│ hashicorp/google: could not connect to registry.terraform.io: Failed to
│ request discovery document: Get
│ "https://registry.terraform.io/.well-known/terraform.json": Proxy
│ Authorization Required
╵
╷
│ Error: Failed to query available provider packages
│
│ Could not retrieve the list of available versions for provider
│ hashicorp/google-beta: could not connect to registry.terraform.io: Failed
│ to request discovery document: Get
│ "https://registry.terraform.io/.well-known/terraform.json": Proxy
│ Authorization Required
╵
time=2022-07-27T09:57:40+01:00 level=error msg=Hit multiple errors:
Hit multiple errors:
exit status 1
PS C:\Cloudrepos\placesforpeople\org\rnd\adam_play_area\test_project_001\compute>
But as mentioned, we don't use, and have never used, these commands to push the changes.
I cannot work out why these labels won't add/remove after the VM has already been created.
I have tried making a change to an instance to trigger an update, such as increasing the disk size.
I have tried to create a block in the module for all the labels needed, but this doesn't work, as you cannot have labels as a block in this module:
labels {
  application      = var.labels.application
  businessunit     = var.labels.businessunit
  costcentre       = var.labels.costcentre
  createdby        = var.labels.createdby
  department       = var.labels.department
  disasterrecovery = var.labels.disasterrecovery
  environment      = var.labels.environment
  contact          = var.labels.contact
  terraform        = var.labels.terraform
}
}
Any ideas? I know you cannot add a label to a project after creation; does the same apply to VM instances? Is there any alternative method I can test?
As requested, this is the code for the VM instance:
terraform {
source = "../../modules//compute_instance_static_ip/"
}
# Include all settings from the root terragrunt.hcl file
include {
path = find_in_parent_folders("org.hcl")
}
dependency "project" {
config_path = "../project"
# Configure mock outputs for the terraform commands that are returned when there are
# no outputs available (e.g. the module hasn't been applied yet).
mock_outputs_allowed_terraform_commands = ["plan", "validate"]
mock_outputs = {
project_id = "project-not-created-yet"
}
}
prevent_destroy = false
inputs = {
gcp_instance_sa_email = "testprj-compute@gc-r-prj-testprj-0001-9627.iam.gserviceaccount.com" # This will tell GCP to use the default GCE service account
instance_name = "rnd-demo-test1"
network = "projects/gc-a-prj-vpchost-0001-3312/global/networks/gc-r-vpc-0001"
subnetwork = "projects/gc-a-prj-vpchost-0001-3312/regions/europe-west2/subnetworks/gc-r-snet-middleware-0001"
zone = "europe-west2-c"
region = "europe-west2"
project = dependency.project.outputs.project_id
os_image = "debian-10-buster-v20220118"
machine_type = "n1-standard-4"
boot_disk_size = 100
instance_scope = ["cloud-platform"]
instance_tags = ["demo-test"]
deletion_protection = "false"
metadata = {
windows-startup-script-ps1 = "Set-TimeZone -Id 'GMT Standard Time' -PassThru"
}
ip_address_region = "europe-west2"
ip_address_type = "INTERNAL"
attached_disks = {
data = {
size = 60
type = "pd-standard"
}
}
/*/ instance_schedule_policy = {
name = "start-stop"
#region = "europe-west2"
vm_start_schedule = "30 07 * * *"
vm_stop_schedule = "00 18 * * *"
time_zone = "GMT"
}
*/
labels = {
application = "demo-test"
businessunit = "homes"
costcentre = "90imt"
createdby = "ab"
department = "it"
disasterrecovery = "no"
environment = "rnd"
contact = "abriers"
terraform = "false"
}
}
terragrunt validate-inputs result below
PS C:\Cloudrepos\placesforpeople\org\rnd> terragrunt validate-inputs
time=2022-07-27T14:25:19+01:00 level=warning msg=The following inputs passed in by terragrunt are unused:
prefix=[C:\Cloudrepos\placesforpeople\org\rnd]
time=2022-07-27T14:25:19+01:00 level=warning msg= - billing_account prefix=[C:\Cloudrepos\placesforpeople\org\rnd]
time=2022-07-27T14:25:19+01:00 level=warning msg= - host_project_id prefix=[C:\Cloudrepos\placesforpeople\org\rnd]
time=2022-07-27T14:25:19+01:00 level=warning prefix=[C:\Cloudrepos\placesforpeople\org\rnd]
time=2022-07-27T14:25:19+01:00 level=info msg=All required inputs are passed in by terragrunt. prefix=[C:\Cloudrepos\placesforpeople\org\rnd]
time=2022-07-27T14:25:19+01:00 level=error msg=Terragrunt configuration has misaligned inputs
time=2022-07-27T14:25:19+01:00 level=error msg=Unable to determine underlying exit code, so Terragrunt will exit with error code 1
PS C:\Cloudrepos\placesforpeople\org\rnd>
I have found the culprit!
In the compute instance module I discovered this block of code. I removed labels from ignore_changes and voilà, the extra labels now appear. Thanks for the assistance and the advice on post formatting.
lifecycle {
  ignore_changes = [
    boot_disk.0.initialize_params.0.image,
    attached_disk, labels
  ]
}

Nodejs app running on localhost:3000 is not accessible on Linux server

I want to deploy my web app on a cloud server; the OS is CentOS 7. I got a static IP address like "34.80.XXX.XX". If I set my web app to run on port 80, I can see the page when I enter "34.80.XXX.XX:80".
But if I run it on port 3000 and enter "34.80.XXX.XX:3000", it doesn't work.
I have tried to stop my firewall:
# systemctl stop firewalld
And used the command below to check that no other program is running on port 3000:
# netstat -tnlp
Here is the code in app.js
const Koa = require('koa')
const json = require('koa-json')

const app = new Koa()
const PORT = 3000

// make JSON Prettier middleware
app.use(json())

// Simple middleware example
app.use(async ctx => {
  ctx.body = { msg: 'Hello World' }
})

app.listen(PORT)
console.log(`server run at http://localhost:${PORT}`)
I use pm2 to run it in background
[root@instance-1 01-hello]# pm2 start app.js -n hello
[PM2] Starting /home/xing/01-hello/app.js in fork_mode (1 instance)
[PM2] Done.
┌─────┬──────────┬─────────────┬─────────┬─────────┬──────────┬────────┬──────┬───────────┬──────────┬──────────┬──────────┬──────────┐
│ id │ name │ namespace │ version │ mode │ pid │ uptime │ ↺ │ status │ cpu │ mem │ user │ watching │
├─────┼──────────┼─────────────┼─────────┼─────────┼──────────┼────────┼──────┼───────────┼──────────┼──────────┼──────────┼──────────┤
│ 0 │ hello │ default │ N/A │ fork │ 3835 │ 0s │ 0 │ online │ 0% │ 12.6mb │ root │ disabled │
└─────┴──────────┴─────────────┴─────────┴─────────┴──────────┴────────┴──────┴───────────┴──────────┴──────────┴──────────┴──────────┘
And I run netstat -tnlp to check that app.js is listening on port 3000:
[root@instance-1 01-hello]# netstat -tnlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 15958/sshd
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 1184/master
tcp 0 0 127.0.0.1:27017 0.0.0.0:* LISTEN 2421/mongod
tcp6 0 0 :::22 :::* LISTEN 15958/sshd
tcp6 0 0 :::3000 :::* LISTEN 3835/node /home/xin
tcp6 0 0 ::1:25 :::* LISTEN 1184/master
I have been stuck on this point for a long time; is there any good solution?

Requests are not distributed across their worker processes

I was just experimenting with worker processes, hence I tried this:
const http = require("http");
const cluster = require("cluster");
const CPUs = require("os").cpus();
const numCPUs = CPUs.length;

if (cluster.isMaster) {
  console.log("This is the master process: ", process.pid);
  for (let i = 0; i < numCPUs; i++) {
    cluster.fork();
  }
  cluster.on("exit", (worker) => {
    console.log(`worker process ${process.pid} has died`);
    console.log(`Only ${Object.keys(cluster.workers).length} remaining...`);
  });
} else {
  http
    .createServer((req, res) => {
      res.end(`process: ${process.pid}`);
      if (req.url === "/kill") {
        process.exit();
      }
      console.log(`serving from ${process.pid}`);
    })
    .listen(3000);
}
I use loadtest to check whether requests are distributed across the worker processes, but I get the same process.pid every time:
This is the master process: 6984
serving from 13108
serving from 13108
serving from 13108
serving from 13108
serving from 13108
...
Even when I kill one of them, I get the same process.pid
worker process 6984 has died
Only 3 remaining...
serving from 5636
worker process 6984 has died
Only 2 remaining...
worker process 6984 has died
Only 1 remaining...
How am I getting the same process.pid when I killed that worker? And why are my requests not distributed across the worker processes?
Even when I use pm2 to test cluster mode using:
$ pm2 start app.js -i 3
[PM2] Starting app.js in cluster_mode (3 instances)
[PM2] Done.
┌────┬────────────────────┬──────────┬──────┬───────────┬──────────┬──────────┐
│ id │ name │ mode │ ↺ │ status │ cpu │ memory │
├────┼────────────────────┼──────────┼──────┼───────────┼──────────┼──────────┤
│ 0 │ app │ cluster │ 0 │ online │ 0% │ 31.9mb │
│ 1 │ app │ cluster │ 0 │ online │ 0% │ 31.8mb │
│ 2 │ app │ cluster │ 0 │ online │ 0% │ 31.8mb │
└────┴────────────────────┴──────────┴──────┴───────────┴──────────┴──────────┘
For loadtest -n 50000 http://localhost:3000 I check pm2 monit:
$ pm2 monit
┌─ Process List ───────────────────────────────────────────────────┐┌── app Logs ────────────────────────────┐
│[ 0] app Mem: 43 MB CPU: 34 % online ││ │
│[ 1] app Mem: 28 MB CPU: 0 % online ││ │
│[ 2] app Mem: 27 MB CPU: 0 % online ││ │
└──────────────────────────────────────────────────────────────────┘└────────────────────────────────────────┘
┌─ Custom Metrics ─────────────────────────────────────────────────┐┌─ Metadata ───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
│ Heap Size 20.81 MiB ││ App Name app │
│ Heap Usage 45.62 % ││ Namespace default │
│ Used Heap Size 9.49 MiB ││ Version N/A │
│ Active requests 0 ││ Restarts 0 │
└──────────────────────────────────────────────────────────────────┘└──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
But surprisingly, apps 1 and 2 never receive any requests, and no app logs are shown.
Update 1
I still couldn't figure out a solution. If any further details are needed, please ask. This is the first time I have faced this issue, which may be why I was unable to pinpoint where exactly the problem occurs.
Update 2
After getting some answers I tried to test it again with a simple node server:
Using pm2 without any config: (output not shown)
Using the config suggested in @Naor Tedgi's answer: (output not shown)
Now the server is not running at all.
Probably it is OS related; I am on Ubuntu 20.04.
You don't have cluster mode enabled. If you want to use pm2 as a load balancer you need to add
exec_mode cluster
Add this config file and name it config.js:
module.exports = {
  apps: [{
    script: "app.js",
    instances: "max",
    exec_mode: "cluster"
  }]
}
and run pm2 start config.js
Then the CPU usage will be divided equally.
Tested on:
os: macOS Catalina 10.15.7
node: v14.15.4
Not sure why, but it seems that for whatever reason cluster doesn't behave on your machine the way it should.
In lieu of using Node.js for balancing you can go for nginx instead. On the nginx side it's fairly easy if one of the available strategies is enough for you: http://nginx.org/en/docs/http/load_balancing.html
Then you need to make sure that your node processes are assigned different ports. In pm2 you can use https://pm2.keymetrics.io/docs/usage/environment/ to either manually increment the port based on the instance id or delegate it fully to pm2.
Needless to say, you'll have to send your requests to nginx in this case.

The package import path is different for dynamic codegen and static codegen

Here is the structure of the src directory of my project:
.
├── config.ts
├── protos
│ ├── index.proto
│ ├── index.ts
│ ├── share
│ │ ├── topic.proto
│ │ ├── topic_pb.d.ts
│ │ ├── user.proto
│ │ └── user_pb.d.ts
│ ├── topic
│ │ ├── service.proto
│ │ ├── service_grpc_pb.d.ts
│ │ ├── service_pb.d.ts
│ │ ├── topic.integration.test.ts
│ │ ├── topic.proto
│ │ ├── topicServiceImpl.ts
│ │ ├── topicServiceImplDynamic.ts
│ │ └── topic_pb.d.ts
│ └── user
│ ├── service.proto
│ ├── service_grpc_pb.d.ts
│ ├── service_pb.d.ts
│ ├── user.proto
│ ├── userServiceImpl.ts
│ └── user_pb.d.ts
└── server.ts
share/user.proto:
syntax = "proto3";
package share;
message UserBase {
string loginname = 1;
string avatar_url = 2;
}
topic/topic.proto:
syntax = "proto3";
package topic;
import "share/user.proto";
enum Tab {
share = 0;
ask = 1;
good = 2;
job = 3;
}
message Topic {
string id = 1;
string author_id = 2;
Tab tab = 3;
string title = 4;
string content = 5;
share.UserBase author = 6;
bool good = 7;
bool top = 8;
int32 reply_count = 9;
int32 visit_count = 10;
string create_at = 11;
string last_reply_at = 12;
}
As you can see, I try to import the share package and use the UserBase message type in the Topic message type. When I try to start the server, I get this error:
no such Type or Enum 'share.UserBase' in Type .topic.Topic
But when I change the package import path to a relative path, import "../share/user.proto";, it works fine and the server logs: Server is listening on http://localhost:3000.
Above is the usage of dynamic codegen.
Now I switch to using static codegen; here is the shell script for generating the code:
protoc \
--plugin=protoc-gen-ts=./node_modules/.bin/protoc-gen-ts \
--ts_out=./src/protos \
-I ./src/protos \
./src/protos/**/*.proto
It seems the protocol buffer compiler doesn't support relative paths; I get this error:
../share/user.proto: Backslashes, consecutive slashes, ".", or ".." are not allowed in the virtual path
So I changed the package import path back to import "share/user.proto";. It generated the code correctly, but when I try to start my server I get the same error:
no such Type or Enum 'share.UserBase' in Type .topic.Topic
It's weird.
Package versions:
"grpc-tools": "^1.6.6",
"grpc_tools_node_protoc_ts": "^4.1.3",
protoc --version
libprotoc 3.10.0
UPDATE:
repo: https://github.com/mrdulin/nodejs-grpc/tree/master/src
Your dynamic codegen is failing because you are not specifying the paths to search for imported .proto files. You can do this using the includeDirs option when calling protoLoader.loadSync, which works in a very similar way to the -I option you pass to protoc. In this case, you are loading the proto files from the src/protos directory, so it should be sufficient to pass the option includeDirs: [__dirname]. Then the import paths in your .proto files should be relative to that directory, just like when you use protoc.
You are probably seeing the same error when you try to use the static code generation because it is actually the dynamic codegen error; you don't appear to be removing the dynamic codegen code when trying to use the statically generated code.
However, the main problem you will face with the statically generated code is that you are only generating the TypeScript type definition files. You also need to generate JavaScript files to actually run it. The official Node gRPC plugin for proto is distributed in the grpc-tools package. It comes with a binary called grpc_tools_node_protoc, which should be used in place of protoc and automatically includes the plugin. You will still need to pass a --js_out flag to generate that code.
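Putting that together, the generation script might look something like this (a sketch only: the --js_out/--grpc_out option strings are common choices, not taken from the question):

```shell
# grpc_tools_node_protoc wraps protoc and bundles the Node gRPC plugin,
# so it replaces the bare protoc call from the question and also emits
# the runnable JavaScript alongside the TypeScript definitions.
./node_modules/.bin/grpc_tools_node_protoc \
  --js_out=import_style=commonjs,binary:./src/protos \
  --grpc_out=./src/protos \
  --plugin=protoc-gen-ts=./node_modules/.bin/protoc-gen-ts \
  --ts_out=./src/protos \
  -I ./src/protos \
  ./src/protos/**/*.proto
```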

clearing cloudflare cache programmatically

I am trying to clear the Cloudflare cache for single URLs programmatically after PUT requests to a Node.js API. I am using the https://github.com/cloudflare/node-cloudflare library; however, I can't figure out how to log a callback from Cloudflare. According to the test file in the same repo, the syntax should be something like this:
// client declaration:
t.context.cf = new CF({
  key: 'deadbeef',
  email: 'cloudflare@example.com',
  h2: false
});

// invoke clearCache:
t.context.cf.deleteCache('1', {
  files: [
    'https://example.com/purge_url'
  ]
})
How can I read out the callback from this request?
I have tried the following in my own code:
client.deleteCache(process.env.CLOUDFLARE_ZONE, { "files": [url] }, function (data) {
  console.log(`Cloudflare cache purged for: ${url}`);
  console.log(`Callback: ${data}`);
})
and:
client.deleteCache('1', {
  files: [
    'https://example.com/purge_url'
  ]
}).then(function (a, b) {
  console.log('helllllllooooooooo');
})
to no avail. :(
Purging Cloudflare cache by url:
var Cloudflare = require('cloudflare');

const { CF_EMAIL, CF_KEY, CF_ZONE } = process.env;
if (!CF_ZONE || !CF_EMAIL || !CF_KEY) {
  throw new Error('you must provide env. variables: [CF_ZONE, CF_EMAIL, CF_KEY]');
}

const client = new Cloudflare({ email: CF_EMAIL, key: CF_KEY });
const targetUrl = `https://example.com/purge_url`;

client.zones.purgeCache(CF_ZONE, { "files": [targetUrl] }).then(function (data) {
  console.log(`Cloudflare cache purged for: ${targetUrl}`);
  console.log(`Callback:`, data);
}, function (error) {
  console.error(error);
});
You can lookup cloudflare zone this way:
client.zones.browse().then(function (zones) {
  console.log(zones);
})
Don't forget to install the current client version:
npm i cloudflare@^2.4.1 --save-dev
I wrote a Node.js module to purge the cache for an entire website. It scans your "public" folder, builds the full URL for each file and purges it on Cloudflare.
You can run it using npx:
npm install -g npx
npx purge-cloudflare-cache your@email.com your_cloudflare_key the_domain_zone https://your.website.com your/public/folder
But, you can install it and run using npm too:
npm install -g purge-cloudflare-cache
purge your@email.com your_cloudflare_key the_domain_zone https://your.website.com your/public/folder
For a public/folder tree like:
├── assets
│ ├── fonts
│ │ ├── roboto-regular.ttf
│ │ └── roboto.scss
│ ├── icon
│ │ └── favicon.ico
│ └── imgs
│ └── logo.png
├── build
│ ├── main.css
│ ├── main.js
├── index.html
It will purge cache for files:
https://your.website.com/index.html
https://your.website.com/build/main.css
https://your.website.com/build/main.js
https://your.website.com/assets/imgs/logo.png
https://your.website.com/assets/icon/favicon.ico
https://your.website.com/assets/fonts/roboto.css
https://your.website.com/assets/fonts/roboto-regular.ttf
This is probably happening because my mocha tests don't wait for the callback to return.
https://github.com/mochajs/mocha/issues/362
