How to capture the node name in LoadRunner - performance-testing

I have a 4-node application server (Nodes 1, 2, 3 & 4). Whenever I run the script, the request goes to one of those nodes. How do I capture which node the request goes to in LoadRunner?
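A common approach is to register a capture for an identifying response header before the request. The sketch below assumes the server returns a header such as X-Node naming the node that served the request; the header name and URL are illustrative, so adapt them to your application:

web_reg_save_param("NodeName",
    "LB=X-Node: ",
    "RB=\r\n",
    "Search=Headers",
    LAST);

web_url("home", "URL=http://myapp/home", LAST);

// Log which node served the request
lr_output_message("Served by node: %s", lr_eval_string("{NodeName}"));

If no such header exists, you would need one added on the server side, or you could capture a node identifier from the response body instead.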

Related

Lots of "Uncaught signal: 6" errors in Cloud Run

I have a Python (3.x) webservice deployed in GCP. Every time Cloud Run shuts down instances, most noticeably after a big load spike, I get many logs like Uncaught signal: 6, pid=6, tid=6, fault_addr=0. together with [CRITICAL] WORKER TIMEOUT (pid:6). They are always signal 6.
The service uses FastAPI and Gunicorn, running in a Docker container with this start command:
CMD gunicorn -w 2 -k uvicorn.workers.UvicornWorker -b 0.0.0.0:8080 app.__main__:app
The service is deployed using Terraform with 1 GiB of RAM, 2 CPUs, and a timeout of 2 minutes:
resource "google_cloud_run_service" <ressource-name> {
name = <name>
location = <location>
template {
spec {
service_account_name = <sa-email>
timeout_seconds = 120
containers {
image = var.image
env {
name = "GCP_PROJECT"
value = var.project
}
env {
name = "BRANCH_NAME"
value = var.branch
}
resources {
limits = {
cpu = "2000m"
memory = "1Gi"
}
}
}
}
}
autogenerate_revision_name = true
}
I have already tried tweaking the resources and timeout in Cloud Run, and using the --timeout and --preload flags for Gunicorn, as that is what people always seem to recommend when googling the problem, but all without success. I also don't know exactly why the workers are timing out.
Extending on the top answer, which is correct: you are using Gunicorn, a process manager that manages the Uvicorn workers that run the actual app.
When Cloud Run wants to shut down the instance (probably due to a lack of requests), it sends a signal 6 to process 1. However, Gunicorn occupies this process as the manager and does not pass the signal on to the Uvicorn workers for handling, so you receive the uncaught signal 6.
The simplest solution is to run Uvicorn directly instead of through Gunicorn (possibly with a smaller instance) and let Cloud Run handle the scaling:
CMD ["uvicorn", "app.__main__:app", "--host", "0.0.0.0", "--port", "8080"]
Unless you have enabled "CPU is always allocated", background threads and processes might stop receiving CPU time after all HTTP requests return. This means background threads and processes can fail, connections can time out, etc. I cannot think of any benefits to running background workers with Cloud Run except when setting the --no-cpu-throttling flag. Cloud Run instances that are not processing requests can be terminated.
Signal 6 means abort (SIGABRT), which terminates processes. This probably means your container is being terminated due to a lack of requests to process.
Run more workloads on Cloud Run with new CPU allocation controls
What if my application is doing background work outside of request processing?
This error happens when a background process is aborted. Background threads have the same advantages on Cloud Run as in other applications, and you can still use them on Cloud Run without processes getting aborted. To do so, when deploying, choose the option "CPU always allocated" instead of "CPU only allocated during request processing".
For more details, check https://cloud.google.com/run/docs/configuring/cpu-allocation
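For reference, the same setting can be applied with the gcloud CLI by disabling CPU throttling on the service (the service name is a placeholder):

gcloud run services update <service-name> --no-cpu-throttling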

How can file writes with NodeJS on Docker be inconsistent?

A while back I created some feed fetching & processing scripts with NodeJS for an application I'm working on. I've been running this script manually on my local machine (OSX) for a while, and I'm currently working on having a job do this. I figured I'd go with a Docker droplet ($5/mo) on DigitalOcean. I've created a Dockerfile with NodeJS (9), cron and everything I think I need. Everything works fine on my local machine when I build and run the Docker container.
However, when I deploy it to DigitalOcean there seems to be different behaviour from running it locally, which is exactly the thing I wanted to prevent by using Docker. I have a main shell script that calls 8 different Node scripts sequentially. On DigitalOcean it seems that some NodeJS scripts exit prematurely, so they haven't finished yet but the main shell script continues with the next NodeJS script. An example:
execute() {
  const outputPath = path.join(this._feedDir, this._outputFile);
  return createDirIfNotExists(this._feedDir, this._log)
    .then(() => this._doRequests())
    .then(requestData => this._writeFile(
      outputPath,
      JSON.stringify({requests: requestData.map(data => JSON.parse(data))}),
      this._log));
}
When running the code:
The script does create the dir.
The script does do all the requests to external sources (and accumulates the data).
The script sometimes writes the data to the file, and sometimes not, without any error code.
The main (shell) script always picks up after that.
Any thoughts?
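One thing worth checking, as a guess since the entry point isn't shown: if the code that calls execute() neither waits for the returned promise nor handles its rejection, the process can exit with status 0 before the asynchronous write finishes, so the shell script sees no error and moves on. A minimal sketch of a defensive entry point; Fetcher stands in for whatever class defines execute():

const fetcher = new Fetcher();

fetcher.execute()
  .catch(err => {
    console.error(err);   // make silent failures visible
    process.exitCode = 1; // let the calling shell script detect the failure
  });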

node app instance name when running via pm2 cluster

I have a backend Node app that is run by pm2 in cluster mode.
I'm running a fixed 2 instances.
Is there a way to identify the instance name or number from within the executed app?
The app name is "test"; I would like to get "test 1" and "test 2" from within the app for a given instance.
Thanks!
You'll need to use two environment variables set by pm2:
process.env.pm_id is automatically set to the instance id (0, 1, ...).
process.env.name is set to the app name (in your case test).
When starting pm2, set the name as:
pm2 start app.js --name test
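Combining the two, a minimal sketch; it assumes pm_id is 0 and 1 for your two instances, so adding 1 yields the labels you asked for:

// inside the app, started with: pm2 start app.js --name test -i 2
const label = `${process.env.name} ${Number(process.env.pm_id) + 1}`;
console.log(label); // "test 1" on the first instance, "test 2" on the second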

Add worker to PM2 pool. Don't reload/restart existing workers

Env.: Node.js on Ubuntu, using PM2 programmatically.
I have started PM2 with 3 instances via Node in my main code. Suppose I use the PM2 command line to delete one of the instances. Can I add another worker back to the pool? Can this be done without affecting the operation of the other workers?
I suppose I should use the start method:
pm2.start({
  name               : 'worker',
  script             : 'api/workers/worker.js', // script to be run
  exec_mode          : 'cluster',               // or 'fork'
  instances          : 1,                       // number of instances to start
  max_memory_restart : '100M',                  // optional: restart the app if it reaches 100 MB
  autorestart        : true
}, function(err, apps) {
  pm2.disconnect();
});
However, if you use pm2 monit you'll see that the 2 existing instances are restarted and no new one is created. The result is still 2 running instances.
Update: it doesn't matter whether it's cluster or fork; the behavior is the same.
Update 2: The command line has the scale option (https://keymetrics.io/2015/03/26/pm2-clustering-made-easy/), but I don't see this method in the programmatic API documentation (https://github.com/Unitech/PM2/blob/master/ADVANCED_README.md#programmatic-api).
I actually think this can't be done in PM2, as I have the exact same problem.
I'm sorry, but I think the solution is to use something else, as PM2 is fairly limited. The lack of ability to add more workers is a deal breaker for me.
I know you can "scale" on the command line if you are using clustering, but I have no idea why you cannot start more instances if you are using fork. It makes no sense.
As far as I know, all PM2 commands can also be used programmatically, including scale. Check out CLI.js to see all available methods.
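A sketch of what that could look like, assuming the scale() method is exposed on the programmatic API in your pm2 version (check CLI.js to confirm):

const pm2 = require('pm2');

pm2.connect(function (err) {
  if (err) throw err;
  // scale the app named 'worker' to 3 instances in total
  pm2.scale('worker', 3, function (err, procs) {
    pm2.disconnect();
    if (err) console.error(err);
  });
});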
Try using the force attribute in the application declaration. If force is true, you can start the same script several times, which is usually not allowed by PM2 (according to the Application Declaration docs).
By the way, autorestart is true by default.
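Applied to the snippet from the question, that would look something like this (a sketch based on the docs, not tested against every PM2 version):

pm2.start({
  name   : 'worker',
  script : 'api/workers/worker.js',
  force  : true // allow starting the same script several times
}, function (err, apps) {
  pm2.disconnect();
});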
You can do so by using an ecosystem.config file.
Inside that file you can specify as many worker processes as you want.
E.g. we used BullJS to develop a microservice architecture of different workers that are started with the help of PM2 on multiple cores: the same worker started multiple times as named instances.
Now when jobs are run, BullJS load-balances the workload for one specific worker across all available instances of that worker.
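A minimal sketch of such an ecosystem.config.js; the app names and script path are illustrative:

// start everything with: pm2 start ecosystem.config.js
module.exports = {
  apps: [
    { name: 'worker-1', script: './script/to/start.js' },
    { name: 'worker-2', script: './script/to/start.js' },
    { name: 'worker-3', script: './script/to/start.js' }
  ]
};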
You could of course start or stop any instance via the CLI, and also start additional named workers via the command line to increase the number of workers (e.g. if many jobs need to be run and you want to process more jobs at a time):
pm2 start './script/to/start.js' --name additional-worker-4
pm2 start './script/to/start.js' --name additional-worker-5

node-inspector: debug and step through child_process

I have been using node-inspector to step through my code and I like it.
However, I am not able to step through forked processes:
... my code ...
var a = getValue();
var b = func1(a);
var command = 'myCommand.js';
child_process.spawn(command, [args], [options]);
I am able to step through code until I reach the child_process statement. Is there a way to step into that function and debug the execution of the command?
Debugging forked processes is not supported out of the box.
You need to:
Instruct the forked process to start the debugger, and to start it on a different port than the one the master process is listening on. See Node's lib/cluster.js for an example of how to implement this part.
Open a new instance of the Node Inspector UI (front-end) to debug the child process. You can reuse the same Node Inspector server; just change the value of the ?port= parameter to match the port the debugger in your child process is listening on.
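A sketch of the first step, assuming the legacy V8 debug protocol that node-inspector speaks; the child's port (5859 here) is arbitrary, as long as it differs from the parent's (usually 5858):

var child_process = require('child_process');

// run the child under node with its own debugger port
var child = child_process.spawn('node', ['--debug-brk=5859', 'myCommand.js'], {
  stdio: 'inherit'
});
// then open http://127.0.0.1:8080/debug?port=5859 in a new Node Inspector window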
