Sophos interfering with NodeJS processes on Mac OSX Big Sur - node.js

Ever since I upgraded to Big Sur, I've noticed that Sophos has begun to interfere dramatically whenever I run Jest tests. CPU usage for Sophos spikes to around 400% when running even a modest Jest test program, with 71 tests currently taking 98.6 seconds to run. After the Jest process completes, Sophos goes back to sleep and no longer takes up substantial resources.
I run these tests from my terminal using Node. My hypothesis is that the problem is actually between Node and Sophos, and it's just exacerbated by the way that Jest runs its tests.
Has anyone come across this problem before, and is there anything I can do to convince Sophos to leave Node alone?
For what it's worth, the tests themselves are bog-standard JS and React unit tests, with the React tests written using React Testing Library.

Related

node.js CPU usage spikes

I have an Express.js app running in cluster mode (Node.js cluster module) on a Linux production server. I'm using PM2 to manage this app. It usually uses less than 3% CPU. However, sometimes the CPU usage spikes up to 100% for a short period of time (less than a minute). I'm not able to reproduce this issue; it only happens once every day or two.
Is there any way to find out which function or route is causing the sudden CPU spikes, using PM2? Thanks.
I think you have some slow synchronous execution on some request in your application.
Add a log for every incoming request in a middleware and store it to Elasticsearch, then find which requests have a long response time. Or use New Relic (the easy way, but it costs more money).
Use blocked-at to find slow synchronous execution; if it detects any, try worker threads or a library like workerpool.
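A minimal sketch of the same idea as blocked-at, using only Node built-ins: wrap a suspect function and warn when a single call blocks the event loop past a threshold (the 50 ms threshold and the `heavyMap` mapper here are made up for illustration; blocked-at does this automatically and adds stack traces):

```javascript
// Wrap a function so we can see when one synchronous call
// blocks the event loop for too long.
function timeSync(name, fn) {
  return (...args) => {
    const start = process.hrtime.bigint();
    const result = fn(...args); // run the original synchronous work
    const ms = Number(process.hrtime.bigint() - start) / 1e6;
    if (ms > 50) {
      console.warn(`${name} blocked the event loop for ${ms.toFixed(1)}ms`);
    }
    return result;
  };
}

// Hypothetical CPU-heavy mapper over an array.
const heavyMap = timeSync('heavyMap', (rows) => rows.map((r) => r * r));
const out = heavyMap(Array.from({ length: 5 }, (_, i) => i));
```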
My answer is based purely on my experience with the topic.
Before going to production, do local testing such as:
stress testing.
longevity testing.
For both tests, use a tool like JMeter, where you can configure one or more endpoints and run heavy loads against them over a period of time while monitoring CPU and memory usage.
If everything is fine, stop the test and exercise the API manually while monitoring its behavior; this will help you spot a memory leak in the APIs themselves.
Is your app running .map() or .reduce() over huge arrays?
Is your app working significantly better after a reboot? If yes, suspect that the Express app has a memory leak and the garbage collector is trying to clean up the mess.
If possible, try rewriting the app using Fastify. Personally, this did not make the app much faster, but it was able to handle 1.5x more requests.

How to find currently running code in Node.js

I have a Node.js/Express web application that sometimes responds slowly. After checking the system CPU and memory, I found it was consuming ~80% of both, and then 1-2 minutes later usage dropped to ~10%.
I think this was because my Node.js app was running some code on the main thread, for example mapping objects retrieved from the database.
It's a little hard to review the code of my application to figure out where the bad code is. So I would like to know: is there any tool or npm module I can use to record the code Node.js is running when an API request takes longer than, say, 5 seconds to process?
I tried v8-profiler, but it seems it only supports starting and then stopping a profiling session, not capturing what code is running at a given moment.
Use Visual Studio Code for debugging your Node.js code:
https://code.visualstudio.com/

Testing Cluster in Node.js

I'm currently using Mocha for testing but I seem to be running into some errors testing an app that uses Cluster.
Basically, the app exits, but then some of the workers keep doing things, and this produces weird output that fails the "before all" hooks, even after the tests have finished.
I saw this thread How to test a clustered Express app with Mocha?
but I wonder if Mocha is even the right module to test a Cluster app with. If so, can someone please point me to a tutorial on how to do it? I couldn't find any after Googling.
I am also using Express in case that complicates things.

node.js hangs other programs on my mac

I'm relatively new to Node and JavaScript. I'm running a program that makes heavy network API calls and processes the results. What I'm experiencing is that my Node code makes other programs running on my Mac (Outlook, Chrome, etc.) unresponsive, to the point that I can't even force-quit them and have to hard-reboot my machine.
Any idea why that's happening? I thought Node.js was somewhat sandboxed and shouldn't affect other programs. Is Node using up all the available sockets?
It seems I've found the reason for Node.js itself using so much CPU and memory. I have some code that processes thousands of rows of user locations and calculates the distances between them. That turns out to be very expensive and takes a big toll on Node. I've moved that code into process.nextTick() and it's no longer hanging other programs.
What I still don't understand is why Node hangs, and why it hangs other programs on my Mac as well.
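For what it's worth, process.nextTick() doesn't actually yield between items; a more reliable way to keep a heavy loop from starving everything else is to process the rows in batches and yield to the event loop between batches with setImmediate. A sketch of that pattern (batch size is arbitrary here):

```javascript
// Process a large array in batches, yielding to the event loop
// between batches so timers, I/O, and the rest of the system get CPU time.
function processInBatches(rows, batchSize, work) {
  return new Promise((resolve) => {
    let i = 0;
    const results = [];
    function step() {
      const end = Math.min(i + batchSize, rows.length);
      for (; i < end; i++) results.push(work(rows[i]));
      if (i < rows.length) setImmediate(step); // yield, then continue
      else resolve(results);
    }
    step();
  });
}
```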

Node.js Clusters with Additional Processes

We use clustering with our Express apps on multi-CPU boxes. It works well; we get the maximum use out of our AWS Linux servers.
We inherited an app we are fixing up. It's unusual in that it has two processes. It has an Express API portion to take incoming requests. But the process that acts on those requests can run for several minutes, so it was built as a separate background process: Node calling Python and Maya.
Originally the two were tightly coupled, with the Python script called by the request that uploads the data. But this of course was suboptimal, as it would leave the client waiting for a response for the time it took to run, so it was rewritten as a background process that runs in a loop, checking for new uploads and processing them sequentially.
So my question is this: if we have this separate Node process running in the background, and we run clusters that start up a process for each CPU, how is that going to work? Are we not going to get two Node processes competing for the same CPU? We were getting some weird behaviour and crashes yesterday, without a lot of error messages (god I love Node), so it's a bit concerning. I'm assuming Linux will just swap the processes in and out as they are being used. But I wonder if it will be problematic, and I also wonder about someone getting their web session swapped out for several minutes while the longer-running process runs.
The smart thing to do would be to rewrite this to run on two different servers, but the files that maya uses/creates are on the server's file system, and we were not given the budget to rebuild the way we should. So, we're stuck with this architecture for now.
Any thoughts on possible problems and how to avoid them would be appreciated.
From an overall architecture perspective, spawning one Node.js process per core is a great way to go. You have a lot of interdependencies, though: the Node.js processes are calling Maya, which may use multiple threads (keep that in mind).
The part that is concerning to me is your random crashes and your "process that runs in a loop". If that process is just checking the file system, you probably have a race condition where the Node.js processes are competing to work on the same input/output files.
In theory, one Node.js process per core will work great and should help you utilize all of your CPU capacity. Linux always swaps processes in and out, so that is not an issue. You could start multiple Node.js processes per core and still not have an issue.
One last note: be sure to keep an eye on your memory usage. Several Linux distributions on EC2 do not have a swap file enabled by default, and running out of memory can be another silent app killer, so it's best to add a swap file in case you run into memory issues.
