How to get CPU utilization in % of a NodeJS process, on EC2 instance? - node.js

I have an Express app which runs on an EC2 instance. When I run the app on localhost I am able to get the CPU utilization in % with the following code -
import os from "os";
import { memoryUsage } from "process";
import osu from "node-os-utils";
import { usageMetrics } from "../utils/usagemetric";
import * as osUtil from "os-utils";

// in the controller
const cpu = osu.cpu;
usageMetrics(cpu, process, os, osUtil);

export const usageMetrics = (cpu: any, process: any, os: any, osUtil: any) => {
  const totalMemory = os.totalmem();
  const rss = process.memoryUsage().rss;
  const totalUnusedMemory = totalMemory - rss;
  const percentageMemoryUnUsed = totalUnusedMemory / totalMemory;
  console.log("system memory", totalMemory);
  console.log("node process memory usage", rss);
  console.log("Memory consumed in %:", 100 - percentageMemoryUnUsed * 100);
  cpu.usage().then((info) => {
    console.log("Node-OS-utils-CPU Usage(%):", info);
  });
  osUtil.cpuUsage(function (v) {
    console.log("OS-Util-CPU-Usage(%): " + v);
  });
  os.cpuUsage(function (v) {
    console.log("native OS-CPU Usage(%):", +v);
  });
};
The above code works well on localhost, giving values like 43%, 15%, 20%, etc. But when I run it on the EC2 instance it shows me 1% or 2% sometimes. The real problem is that both libraries, os-utils and node-os-utils, report 0% CPU utilization. Any help on how I can get the actual CPU utilization, either with these libraries or with native Node.js methods?
[Screenshot: output while running on EC2]
[Screenshot: output while running on localhost]

Related

Jest test fail: "● default root route"

I'm trying to write Jest tests for a Fastify project. But I'm stuck with the example code failing with an ambiguous error: "● default root route".
// root.test.ts
import { build } from '../helper'

const app = build()

test('default root route', async () => {
  const res = await app.inject({
    url: '/'
  })
  expect(res.json()).toEqual({ root: true })
})
// helper.ts
import Fastify from "fastify"
import fp from "fastify-plugin"
import App from "../src/app"

export function build() {
  const app = Fastify()
  beforeAll(async () => {
    void app.register(fp(App))
    await app.ready()
  })
  afterAll(() => app.close())
  return app
}
// console error:
FAIL test/routes/root.test.ts (8.547 s)
● default root route
A worker process has failed to exit gracefully and has been force exited. This is likely caused by tests leaking due to improper teardown. Try running with --detectOpenHandles to find leaks. Active timers can also cause this, ensure that .unref() was called on them.
What am I doing wrong?
After running Jest with --detectOpenHandles, it reported open ioredis connections that were timing out.
I hooked the ioredis instances up to the Fastify lifecycle with fastify-redis, and the test passed.

Next.js: Calling import from different endpoint executes the code again

Here's the simplified example I made for this question.
Let's say I want to keep a state on the server side.
components/dummy.ts
console.log('init array')
let num: number = 0
const increment: () => number = () => num++
export { increment }
Also I have two end-points, p1 and p2 that I want to share that state.
pages/api/x/p1.ts
import type { NextApiRequest, NextApiResponse } from 'next'
import { increment } from '../../../components/dummy'
export default function handler(
  req: NextApiRequest,
  res: NextApiResponse<number>
) {
  res.status(200).json(increment())
}
pages/api/x/p2.ts
import type { NextApiRequest, NextApiResponse } from 'next'
import { increment } from '../../../components/dummy'
export default function handler(
  req: NextApiRequest,
  res: NextApiResponse<number>
) {
  res.status(200).json(increment())
}
Those are the two API routes, and then I have some pages fetching the same state using getServerSideProps.
pages/x/p3.tsx
import { GetServerSideProps } from 'next'
import React from 'react'
import { increment } from '../../components/dummy'
interface CompProps {
  num: number
}

const Comp: React.FC<CompProps> = ({ num }) => <>{num}</>
export default Comp

export const getServerSideProps: GetServerSideProps = async ({}) => ({
  props: {
    num: increment(),
  },
})
pages/x/p4.tsx
import { GetServerSideProps } from 'next'
import React from 'react'
import { increment } from '../../components/dummy'
interface CompProps {
  num: number
}

const Comp: React.FC<CompProps> = ({ num }) => <>{num}</>
export default Comp

export const getServerSideProps: GetServerSideProps = async ({}) => ({
  props: {
    num: increment(),
  },
})
So basically 2 issues.
On Dev (yarn dev)
Now when I hit api/x/p1 I get a 0, then 1, 2, 3.
But when I then hit api/x/p2 I get a 0, and p1 is also reset to this new value; from this point on, p1 and p2 share the same state. I can alternate between p1 and p2 and get a consistent increment.
What I want to understand here is the nature of import.
How can I prevent the module code from running again with each import coming from a new endpoint?
On Prod (yarn build && yarn start)
On prod it's better, because api/x/p1 and api/x/p2 share the same state.
But the pages using getServerSideProps (p3 and p4) share a state of their own between themselves, and it is a different one from the state shared by p1 and p2.
So the /api routes and the getServerSideProps pages each have their own state that is not shared between the two groups.
I couldn't reproduce your issue.
I've created a sample project in order to check the module behavior, and have confirmed that the modules are imported only once, cached, keeping state.
Check it out:
/* index.js */
import './mod1.js'
import './mod2.js'

/* mod1.js */
import { increment } from "./shared.js";
setInterval(() => {
  increment()
}, 2000);

/* mod2.js */
import { increment } from "./shared.js";
setInterval(() => {
  increment()
}, 2000);

/* shared.js */
let num = 0
const increment = () => {
  num++
  console.log(num)
}
export { increment }
node index.js
outputs:
1
2
3
4
5
6
7
8
^C
My guess: when you're running the app in development mode, Next.js compiles modules on demand. So when you go to the first route, you can see Next.js printing compilation logs to stdout (the VS Code console). When it finishes, a development-mode feature called hot reload automatically loads the freshly compiled module into memory. Then, when you go to the second route, the other module is compiled and hot-reloaded the same way. This can reset the state of the app (modules are unloaded and then reloaded), and I guess that is what's happening. To confirm, run the built app (next build, then next start): there is no hot reload when running the compiled bundle, so the state won't be reset.

Trying to get CPU temperature with the systeminformation npm library

I'm trying to get the CPU temperature using systeminformation in Node.js, but I'm getting null values in the output.
I also installed lm-sensors before reading the CPU information.
The application is running on an AWS EC2 instance.
import si from 'systeminformation'
import { execSync } from 'child_process'

let output = execSync('sudo apt-get install lm-sensors')
const temperature = await si.cpuTemperature()
console.log(temperature)
output:
{ main: null, cores: [], max: null, socket: [], chipset: null }
AWS EC2 won't expose the CPU temperature to the guest (the instance is virtualized, so there is no hardware sensor to read), but this should work on any physical local machine.

Gatsby preview server in a serverless/stateless environment

Gatsby has documentation on how to setup a preview server here
The problem is that it requires a server running 24/7 listening for requests. I would like to achieve the same result in a serverless setup (AWS Lambda, to be specific), since we would need a preview only rarely.
The context here is using Gatsby with Wordpress as a headless data backend, and I want to implement a custom preview link in Wordpress for previewing posts before publishing them.
So far, there are two main setbacks:
Size: currently, node_modules for a Gatsby starter with Wordpress is 570 MB.
Speed: stateless means every preview request would run gatsby develop again.
I honestly don't know a solution for size here; I'm not sure how to strip the packages down.
As for speed, maybe there's a low level Gatsby API function to directly render a page to HTML? For example, a Node.js Lambda code could look like this (buildPageHTML is a hypothetical function I'm trying to find)
import buildPageHTML from "gatsby"
exports.handler = async function(event) {
const postID = event.queryStringParameters.postID
return buildPageHTML(`/preview_post_by_id/${postID}`)
}
Any ideas on how to go on about this?
Running Gatsby in an AWS Lambda
Try this lambda (from this beautiful tutorial):
import { Context } from 'aws-lambda';
import { link } from 'linkfs';
import mock from 'mock-require';
import fs from 'fs';
import { tmpdir } from 'os';
import { runtimeRequire } from '#/utility/runtimeRequire.utility';
import { deployFiles } from '#/utility/deployFiles.utility';
/* -----------------------------------
*
* Variables
*
* -------------------------------- */
const tmpDir = tmpdir();
/* -----------------------------------
*
* Gatsby
*
* -------------------------------- */
function invokeGatsby(context: Context) {
const gatsby = runtimeRequire('gatsby/dist/commands/build');
gatsby({
directory: __dirname,
verbose: false,
browserslist: ['>0.25%', 'not dead'],
sitePackageJson: runtimeRequire('./package.json'),
})
.then(deployFiles)
.then(context.succeed)
.catch(context.fail);
}
/* -----------------------------------
*
* Output
*
* -------------------------------- */
function rewriteFs() {
const linkedFs = link(fs, [
[`${__dirname}/.cache`, `${tmpDir}/.cache`],
[`${__dirname}/public`, `${tmpDir}/public`],
]);
linkedFs.ReadStream = fs.ReadStream;
linkedFs.WriteStream = fs.WriteStream;
mock('fs', linkedFs);
}
/* -----------------------------------
*
* Handler
*
* -------------------------------- */
exports.handler = (event: any, context: Context) => {
rewriteFs();
invokeGatsby(context);
};
find the source here

Pushing process to background causes high kswapd0

I have a CPU-intensive process on a Raspberry Pi that's started by running a Node.js file. Running the first command (below) and then running the file in another tab works just fine. However, when I run the process via a bash shell script, the process stalls.
Looking at the processes using top, I see that kswapd0 and kworker/2:1+ take over most of the CPU. What could be causing this?
FYI, the first command begins the Ethereum discovery protocol via HTTP and IPC
geth --datadir $NODE --syncmode 'full' --port 8080 --rpc --rpcaddr 'localhost' --rpcport 30310 --rpcapi 'personal,eth,net,web3,miner,txpool,admin,debug' --networkid 777 --allow-insecure-unlock --unlock "$HOME_ADDRESS" --password ./password.txt --mine --maxpeers 100 2> results/log.txt &
sleep 10
# create storage contract and output result
node performanceContract.js
UPDATE:
performanceContract.js
const ethers = require('ethers');
const fs = require('fs')
const provider = new ethers.providers.IpcProvider('./node2/geth.ipc')
const walletJson = fs.readFileSync('./node2/keystore/keys', 'utf8')
const pwd = fs.readFileSync('./password.txt', 'utf8').trim();
const PerformanceContract = require('./contracts/PerformanceContract.json');
(async function () {
  try {
    const wallet = await ethers.Wallet.fromEncryptedJson(walletJson, pwd)
    const connectedWallet = wallet.connect(provider)
    const factory = new ethers.ContractFactory(PerformanceContract.abi, PerformanceContract.bytecode, connectedWallet)
    const contract = await factory.deploy()
    const deployedInstance = new ethers.Contract(contract.address, PerformanceContract.abi, connectedWallet);
    let tx = await deployedInstance.loop(6000)
    fs.writeFile(`./results/contract_result_xsmall_${new Date()}.txt`, JSON.stringify(tx, null, 4), () => {
      console.log('file written')
    })
    ...
Where loop is a method that repeatedly runs the keccak256 hash. Its purpose is to test different gas costs by varying the loop count.
Solved by increasing the sleep time to 1 minute. I assume it was just a memory issue that needed more time before executing the contract.
