Slowness of node/deno postgres client on mac - node.js

I have encountered peculiar slowness on a Mac when using node-postgres or deno-postgres. I have a very simple table with two columns, and when I execute the query select * from table, it runs very, very slowly. I have also tried selecting directly with a SQL client, and that is very fast.
To be precise: the table has 60 entries and two columns, on a remote Postgres server (12.2).
I have the following three scripts.
// node v13.12.0
const { Client } = require('pg')
const client = new Client({
  user: 'u',
  host: 'address',
  database: 'db',
  password: 'pw',
  port: 5432,
})
client.connect()
const start = Date.now();
client.query('SELECT * from unit', (err, res) => {
  const ms = Date.now() - start;
  console.log(`db call ${ms}`);
  console.log(res.rows.length);
  client.end()
})
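A side note on the node script: client.connect() isn't awaited, so start is taken before the connection is actually established, and the measured time includes connection setup, not just the query. A variant that separates the two (same pg API, promise form):
(async () => {
  // wait for the connection before starting the clock
  await client.connect()
  const start = Date.now()
  const res = await client.query('SELECT * from unit')
  console.log(`db call ${Date.now() - start}`)
  console.log(res.rows.length)
  await client.end()
})()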
// deno 1.1.2
// v8 8.5.216
// typescript 3.9.2
import { Client } from "https://deno.land/x/postgres@v0.4.2/mod.ts";
const client = new Client({
  user: "u",
  database: "db",
  hostname: "addr",
  password: "pw",
  port: 5432,
});
await client.connect();
const start = Date.now();
const dataset = await client.query("SELECT * FROM unit");
const ms = Date.now() - start;
console.log(`db call ${ms}`);
console.log(dataset.rowsOfObjects().length)
# python 3.7.7
import psycopg2
from datetime import datetime

connection = psycopg2.connect(user="u",
                              password="p",
                              host="addr",
                              port="5432",
                              database="db")
cursor = connection.cursor()
a = datetime.now()
cursor.execute("select * from unit")
records = cursor.fetchall()
b = datetime.now()
c = b - a
print(len(records))
print(c.total_seconds() * 1000)
and when I execute all three scripts on my macOS (10.15.5) machine, I get the following results:
"select * from unit" (60 records)
node ~16'000ms
deno ~16'000ms
python ~240ms
when I execute "select * from unit limit 5"
node ~480ms
deno ~110ms
python ~220ms
When I execute "select * from unit" on the same Ubuntu server where Postgres is installed, all three scripts execute in around 10ms.
I have enabled timing and full logging on the Postgres server, and I can see that the queries in all of the above situations executed in under one millisecond, around ~0.600ms.
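For reference, the server-side knob behind that logging (a sketch, assuming superuser rights, run once through any connected client such as the ones above):
// log every statement together with its execution time
await client.query("ALTER SYSTEM SET log_min_duration_statement = 0")
await client.query("SELECT pg_reload_conf()")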
At this point, I have a feeling that the fault lies at the intersection of node/deno and my macOS, which could plausibly be V8, or something else that deno and node share.
So, what could it be?
P.S. I also tried the node profiler and I see this:
[Summary]:

   ticks  total  nonlib   name
       0   0.0%    0.0%   JavaScript
     116  84.7%   99.1%   C++
      22  16.1%   18.8%   GC
      20  14.6%           Shared libraries
       1   0.7%           Unaccounted

[C++ entry points]:

   ticks    cpp   total   name
      45  54.9%   32.8%   T __ZN2v88internal32Builtin_DatePrototypeSetUTCHoursEiPmPNS0_7IsolateE
      36  43.9%   26.3%   T __ZN2v88internal21Builtin_HandleApiCallEiPmPNS0_7IsolateE
       1   1.2%    0.7%   T __ZN2v88internal23Builtin_DateConstructorEiPmPNS0_7IsolateE
but I have no idea what that might mean. (For what it's worth, the mangled names demangle to V8's built-in Date functions, and the low overall tick count suggests the process spent most of those 16 seconds waiting rather than computing.)

OK, I finally figured it out.
As nothing was working, I decided to move my API to the remote server instead of running it locally. I started it up and was pleased to see instant communication between the API and the database... only to see exactly the same slowness on the frontend running on my machine.
And this is when it dawned on me: this was some sort of traffic shaping by my internet provider. I turned on a VPN and everything immediately started working as expected.
No wonder I couldn't understand why it was getting stuck; the issue was way down the stack. This will be a lesson for me: always think outside the box that is the computer itself.
This explains why it was sometimes working normally. However, it doesn't explain why the issue never affected the Python script; maybe it communicates with the Postgres server in a slightly different manner that doesn't trigger the provider's filter. Who knows.
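One concrete difference that could explain it: psycopg2 goes through libpq, whose default sslmode is "prefer", meaning it tries TLS first, while node-postgres leaves SSL off unless asked. So the Python traffic may have been encrypted while the node/deno traffic was plaintext, which is exactly the kind of distinction a provider-side filter can key on. A sketch of turning SSL on in node-postgres (same pg API as above; rejectUnauthorized: false is a testing shortcut, not for production):
const { Client } = require('pg')
const client = new Client({
  user: 'u',
  host: 'address',
  database: 'db',
  password: 'pw',
  port: 5432,
  ssl: { rejectUnauthorized: false }, // accept the server certificate without CA verification (test only)
})
An SSH tunnel (e.g. ssh -N -L 5433:localhost:5432 u@address, then pointing the unchanged script at localhost:5433) would be another quick way to confirm the interference, since it hides the traffic from the filter much like the VPN did.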

Related

Pushing process to background causes high kswapd0

I have a CPU-intensive process running on a Raspberry Pi that's executed by running a Node.js file. Running the first command (below) and then running the file in another tab works just fine. However, when I run the process via a bash shell script, the process stalls.
Looking at the processes using top, I see that kswapd0 and kworker/2:1+ take over most of the CPU. What could be causing this?
FYI, the first command starts the Ethereum discovery protocol via HTTP and IPC:
geth --datadir $NODE --syncmode 'full' --port 8080 --rpc --rpcaddr 'localhost' --rpcport 30310 --rpcapi 'personal,eth,net,web3,miner,txpool,admin,debug' --networkid 777 --allow-insecure-unlock --unlock "$HOME_ADDRESS" --password ./password.txt --mine --maxpeers 100 2> results/log.txt &
sleep 10
# create storage contract and output result
node performanceContract.js
UPDATE:
performanceContract.js
const ethers = require('ethers');
const fs = require('fs');
const provider = new ethers.providers.IpcProvider('./node2/geth.ipc');
const walletJson = fs.readFileSync('./node2/keystore/keys', 'utf8');
const pwd = fs.readFileSync('./password.txt', 'utf8').trim();
const PerformanceContract = require('./contracts/PerformanceContract.json');

(async function () {
  try {
    const wallet = await ethers.Wallet.fromEncryptedJson(walletJson, pwd);
    const connectedWallet = wallet.connect(provider);
    const factory = new ethers.ContractFactory(PerformanceContract.abi, PerformanceContract.bytecode, connectedWallet);
    const contract = await factory.deploy();
    const deployedInstance = new ethers.Contract(contract.address, PerformanceContract.abi, connectedWallet);
    let tx = await deployedInstance.loop(6000);
    fs.writeFile(`./results/contract_result_xsmall_${new Date()}.txt`, JSON.stringify(tx, null, 4), () => {
      console.log('file written');
    });
    ...
Where loop is a method that repeatedly runs the keccak256 hashing function. Its purpose is to test different gas costs by varying the loop count.
Solved by increasing the sleep time to 1 min. I assume it was just a memory issue that needed more time before executing the contract.
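If the fixed sleep ever proves flaky again, an alternative (a sketch, assuming the same ethers IpcProvider as above; the retry count and delay are arbitrary) is to poll the node until it answers before deploying:
const ethers = require('ethers')
const provider = new ethers.providers.IpcProvider('./node2/geth.ipc')

// keep asking for the block number until geth responds, then proceed
async function waitForNode(retries = 60) {
  for (let i = 0; i < retries; i++) {
    try {
      await provider.getBlockNumber()
      return
    } catch (e) {
      await new Promise((resolve) => setTimeout(resolve, 1000)) // wait 1s, retry
    }
  }
  throw new Error('geth did not come up in time')
}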

Query a remote server's operating system

I'm writing a microservice in Node.js that runs a particular command-line operation to get a specific piece of information. The service runs on multiple servers, some of them on Linux, some on Windows. I'm using ssh2-exec to connect to the servers and execute a command; however, I need a way of determining each server's OS so I can run the correct command.
let ssh2Connect = require('ssh2-connect');
let ssh2Exec = require('ssh2-exec');

ssh2Connect(config, function(error, connection) {
  let process = ssh2Exec({
    cmd: '<CHANGE THE COMMAND BASED ON OS>',
    ssh: connection
  });
  // using the results of process...
});
I have an idea for a solution: following this question, run some other command beforehand and determine the OS from the output of said command. However, I want to learn whether there's a more "formal" way of achieving this, specifically using the SSH2 library.
Below is how I would think it would be done...
let ssh2Connect = require('ssh2-connect');
let ssh2Exec = require('ssh2-exec');

// Import the os module; this allows you to read the OS type the app is running on
const os = require('os');

// Windows platform strings; there is only one, but for consistency's sake we keep
// it in an array (if that ever changes, adding to an array is easy and the rest
// of the code doesn't need to change)
const winRMOS = ['win32'];

// Platforms that need to use the SSH protocol (see note above)
const sshOS = ['darwin', 'linux', 'freebsd'];

// ssh function: pick the command based on the platform
const sshConnect = (config) => {
  ssh2Connect(config, function(error, connection) {
    let cmd;
    if (os.platform() === 'darwin') {
      cmd = 'Some macOS command';
    } else if (os.platform() === 'linux') {
      cmd = 'Some linux command';
    }
    let process = ssh2Exec({
      cmd: cmd,
      ssh: connection
    });
    // using the results of process...
  });
};

// winrm function; there may be some other way to do this, but winrm is the way I know
const winRMConnect = (config) => {
  // connect over WinRM here and run 'Some Windows command'...
};

// if statements to determine which one to use, based on what os.platform() returns
if (sshOS.includes(os.platform())) {
  sshConnect(config);
} else if (winRMOS.includes(os.platform())) {
  winRMConnect(config);
}
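For the "probe first" idea mentioned in the question itself (detecting the remote OS rather than the local one), here is a minimal sketch, assuming the same cmd/ssh options and a child-process-style return value as in the snippets above; the two final commands are placeholders:
let ssh2Connect = require('ssh2-connect');
let ssh2Exec = require('ssh2-exec');

ssh2Connect(config, function(error, connection) {
  // `uname -s` exists on Linux/macOS/BSD and normally fails on Windows
  let probe = ssh2Exec({ cmd: 'uname -s', ssh: connection });
  let output = '';
  probe.stdout.on('data', (chunk) => { output += chunk; });
  probe.on('exit', (code) => {
    const cmd = (code === 0 && /linux|darwin|bsd/i.test(output))
      ? '<unix command>'
      : '<windows command>';
    let process = ssh2Exec({ cmd: cmd, ssh: connection });
    // using the results of process...
  });
});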

Nodejs postgres pg syntax error

As the title says, I'm running into a syntax error using node-postgres. Here's what the code looks like:
const {Pool, Client} = require('pg')
const pool = new Pool({
  user: '<user>',
  host: '<host>',
  database: '<database>',
  password: '<pw>',
  port: <port>
})

let query = `SELECT * FROM user JOIN notifications ON user.user_id = notifications.user_id WHERE user_id=$1`
let values = ["123"]

pool.query(query, values)
  .then(() => { /* do something */ })
  .catch((err) => { console.log(err) })
Based on this query, I get a syntax error with the message
syntax error at or near "."
Since the same query runs fine in pgAdmin, my guess is that it's module-specific, but I haven't figured out what the problem is.
Any help much appreciated!
Edit: added missing bracket, thanks to Sreeragh A R
user is a reserved word in PostgreSQL; you have to escape it with double quotes:
let query = `SELECT * FROM "user" JOIN notifications ON "user".user_id = notifications.user_id WHERE "user".user_id=$1`
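Equivalently, aliasing the quoted table once keeps the rest of the query free of double quotes:
let query = `SELECT * FROM "user" u JOIN notifications n ON u.user_id = n.user_id WHERE u.user_id = $1`
pool.query(query, ["123"])
  .then((res) => { console.log(res.rows) })
  .catch((err) => { console.log(err) })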

NodeJS - How to handle "listen EADDRINUSE" when accessing external process

I'm using PhantomJS for printing PDFs, with the phantomjs-node module. It works well, but when I try to create several files at once it throws an unhandled "listen EADDRINUSE" error.
I assume this is because the module uses PhantomJS, which is an external process, and it can't bind the same port several times?
Anyway, I can't catch this error, and I'd like to resolve the problem, at least by avoiding a server crash when it happens.
I thought of using a "global" variable as a lock, in order to block concurrent calls until the current one has finished.
Any idea how to implement that, or any other solution?
The code from @AndyD is not correct, IMHO. See lines 45-54 in
https://github.com/sgentle/phantomjs-node/blob/master/phantom.coffee
So the example should be:
var portscanner = require('portscanner');
var phantom = require('phantom');

portscanner.findAPortNotInUse(40000, 60000, 'localhost', function(err, freeport) {
  phantom.create({'port': freeport}, function(ph) {
    ...
  });
});
You should be able to pass in a port number every time you call create:
var phantom = require('phantom');
phantom.create(null, null, function(ph){
}, null, 11111);
You can then use a counter to ensure it's different every time you start phantomjs-node.
If you are starting a new process every time and you can't share a counter then you can use portscanner to find a free port:
var portscanner = require('portscanner');
var phantom = require('phantom');

portscanner.findAPortNotInUse(40000, 60000, 'localhost', function(err, freeport) {
  phantom.create(null, null, function(ph) {
    ...
  }, null, freeport);
});
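Putting the two answers together, a sketch of the counter idea using the options-object signature from the correction above (the base port is arbitrary):
var phantom = require('phantom');

var nextPort = 40000; // bump the port for every PhantomJS instance we spawn
function createPhantom(callback) {
  phantom.create({'port': nextPort++}, callback);
}

createPhantom(function(ph) {
  // render the PDF with ph as usual...
});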

Node.js cluster + collective

I have submitted the issue to the GitHub repo, so as to track it there!
I'm running a clustered app that could be on a machine with N cores. Let's say I am running two instances of the app locally for testing, really emulating two different boxes. So: N cores on N machines, using the cluster module (in reality, the number of machines is static, e.g. just two behind an AWS load balancer).
How do I properly configure the collective.js "all_hosts" option for this? Would I use process.pid somehow, along with the IP?
Running the code snippets would be something along the lines of 2 bash terminals:
terminal 1:
coffee cluster1
terminal 2:
coffee cluster2
Note: the code below runs, but doesn't really work, as I can't quite figure out the configuration; each time I log data, it's specific to the process.
cluster1.coffee:
cluster = require 'cluster'
numCPUs = require('os').cpus().length

if cluster.isMaster
  i = 0
  cluster.setupMaster
    exec: './server1'
  console.log "App 1 clustering with: #{numCPUs} clusters"
  while i < numCPUs
    cluster.fork()
    i++
  cluster.on 'fork', (worker) ->
    console.log 'Forked App 1 server worker ' + worker.process.pid
server1.coffee:
Collective = require 'collective'

all_hosts = [
  host: 'localhost', port: 8124 # Wrong
]

collective = new Collective(
  host: 'localhost'
  port: 8124
, all_hosts, (collective) ->
)

collectiveUpsert = () ->
  num = Math.floor((Math.random() * 10000) + 1)
  data =
    num: num
  console.log process.pid + ' sees current num as: ' + JSON.stringify(collective.get('foo.bar'))
  console.log process.pid + ' setting num to: ' + JSON.stringify(data)
  collective.set 'foo.bar', data

setInterval (->
  collectiveUpsert()
), 5 * 1000
cluster2.coffee:
cluster = require 'cluster'
numCPUs = require('os').cpus().length

if cluster.isMaster
  i = 0
  cluster.setupMaster
    exec: './server2'
  console.log "App 2 clustering with: #{numCPUs} clusters"
  while i < numCPUs
    cluster.fork()
    i++
  cluster.on 'fork', (worker) ->
    console.log 'Forked App 2 server worker ' + worker.process.pid
server2.coffee:
Collective = require 'collective'

all_hosts = [
  host: 'localhost', port: 8124 # Wrong
]

collective = new Collective(
  host: 'localhost'
  port: 8124
, all_hosts, (collective) ->
)

collectiveUpsert = () ->
  num = Math.floor((Math.random() * 10000) + 1)
  data =
    num: num
  console.log process.pid + ' sees current num as: ' + JSON.stringify(collective.get('foo.bar'))
  console.log process.pid + ' setting num to: ' + JSON.stringify(data)
  collective.set 'foo.bar', data

setInterval (->
  collectiveUpsert()
), 5 * 1000
In order to use collective.js with cluster and/or multiple servers, you need to start it in every Node.js child process. Think of it like the http module, where you have to create the listener on every child/slave, not the master (http://nodejs.org/api/cluster.html#cluster_cluster). Following similar logic, for collective.js you should do something like this (single server):
if (cluster.isMaster) {
  // fork n children
} else {
  var current_host = {host: "localhost", port: 10000};
  current_host.port += cluster.worker.id; // this is incremented for every new process.
  var all_hosts = [
    {"host": "localhost", "port": 10001},
    {"host": "localhost", "port": 10002},
    {"host": "localhost", "port": 10003},
    {"host": "localhost", "port": 10004},
    {"host": "localhost", "port": 10005},
    {"host": "localhost", "port": 10006}
    // must be the same count as the child process count.
  ];
  var collective = new modules.collective(current_host, all_hosts, function (collective) {
    // Do your usual stuff. Start http listener, etc...
  });
}
You should change localhost to your IP addresses and make sure the ports increment properly if you want to use this across different servers.
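Since all_hosts has to match the worker count exactly, here is a small sketch that derives it instead of hardcoding (same shape as the example above; the base port is an assumption):
var numCPUs = require('os').cpus().length;
var base_port = 10000;
var all_hosts = [];
for (var i = 1; i <= numCPUs; i++) {
  all_hosts.push({host: "localhost", port: base_port + i}); // one entry per worker
}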
For any additional information you can check the crude tests at test/index.js.
Hope that helps! If you need any further assistance, please ask.
P.S. Admittedly, this way is too cumbersome and needs clearer explanation. I hope to figure out a cleaner and easier initialization process in the near future and, in addition to that, to clarify the readme and provide some full examples.
