Neo4j - Neo4jError: Connection was closed by server - node.js

I've created a Google Cloud VM instance with Neo4j on it by following these instructions:
https://neo4j.com/docs/operations-manual/current/cloud-deployments/neo4j-gcp/single-instance-vm/
The database looks fine in Neo4j Browser: I can view and manipulate data from there.
The problem is when trying to connect to Neo4j from Node.js using the neo4j-driver npm package.
I keep getting this error:
UnhandledPromiseRejectionWarning: Neo4jError: Connection was closed by server
This is the code:
const neo4j = require('neo4j-driver')

const uri = 'bolt://[EXTERNAL_IP]:[PORT]'
const user = 'USER_NAME'
const password = 'PASSWORD'

const createPerson = async () => {
  const driver = neo4j.driver(uri, neo4j.auth.basic(user, password), { encrypted: 'ENCRYPTION_OFF' })
  const session = driver.session()
  const personName = 'Bob'
  try {
    const result = await session.run(
      'CREATE (a:Person {name: $name}) RETURN a',
      { name: personName }
    )
    const singleRecord = result.records[0]
    const node = singleRecord.get(0)
    console.log(node.properties.name)
  } finally {
    await session.close()
  }
  // on application exit:
  await driver.close()
}

createPerson()
The code is taken from this link: https://neo4j.com/developer/javascript/
I ran the command gcloud compute ssh my-neo4j-instance in the GCP Cloud Shell, hoping that SSHing in would solve the issue, but it didn't.
Changing the driver's encrypted option to false also didn't help.
Console logging the error prints this:
Neo4jError: Connection was closed by server
at captureStacktrace (C:\Users\user\Desktop\Home Work\MyProject\LotteryService\node_modules\neo4j-driver\lib\result.js:263:15)
at new Result (C:\Users\user\Desktop\Home Work\MyProject\LotteryService\node_modules\neo4j-driver\lib\result.js:68:19)
at Session._run (C:\Users\user\Desktop\Home Work\MyProject\LotteryService\node_modules\neo4j-driver\lib\session.js:174:14)
at Session.run (C:\Users\user\Desktop\Home Work\MyProject\LotteryService\node_modules\neo4j-driver\lib\session.js:135:19)
at createPerson (C:\Users\user\Desktop\Home Work\MyProject\LotteryService\src\index.js:12:38)
at Object.<anonymous> (C:\Users\user\Desktop\Home Work\MyProject\LotteryService\src\index.js:31:1)
at Module._compile (internal/modules/cjs/loader.js:959:30)
at Object.Module._extensions..js (internal/modules/cjs/loader.js:995:10)
at Module.load (internal/modules/cjs/loader.js:815:32)
at Function.Module._load (internal/modules/cjs/loader.js:727:14) {
code: 'ServiceUnavailable',
name: 'Neo4jError'
What am I missing?
EDIT 1:
I looked at the Google Cloud logs; all I saw were logs for the start and stop of the VM.
Also, I fixed the async function call in the main file.
So instead of
createPerson();
I tried this:
(async () => {
  console.log('before start');
  await createPerson();
  console.log('after start');
})();
Still no good; I still get Neo4jError: Connection was closed by server.
I also asked on Neo4j blogs but got no answers.
EDIT 2:
I saw my issue in a different Stack Overflow post. The answer there was to complete the initial login to Neo4j by changing the default password, but I already did that...
I also joined the Neo4j Slack group but still haven't gotten an answer.
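One thing I still plan to try is checking whether the driver can open a connection at all, separately from running any query, using driver.verifyConnectivity(). A minimal sketch, assuming a 4.x neo4j-driver; the URI and credentials are the same placeholders as above:

// Minimal connectivity check, separate from any query (assumes neo4j-driver 4.x).
const neo4j = require('neo4j-driver')

const driver = neo4j.driver(
  'bolt://[EXTERNAL_IP]:[PORT]',
  neo4j.auth.basic('USER_NAME', 'PASSWORD'),
  { encrypted: 'ENCRYPTION_OFF' }
)

driver.verifyConnectivity()
  .then(serverInfo => console.log('Connected to', serverInfo.address))
  .catch(err => console.error('Connectivity check failed:', err))
  .finally(() => driver.close())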

Related

Can connect to MongoDB via batch/PowerShell but not programmatically (Node.js)

Using the external mongo.exe, I can connect to the databases of our environments via:
mongo.exe "mongodb://aaa.unix.abc:27018,bbb.unix.abc:27018,ccc.unix.abc:27018/mydb?replicaSet=myreplicaset" --authenticationMechanism=GSSAPI --authenticationDatabase=$external --username "user#NONPROD#ABC.COM" --password "password" --ssl --sslCAFile C:\mymongostuff\ca.pem
So I have no problem whatsoever connecting via batch scripts and powershell scripts but my problem comes with trying to connect via application (whether Java or JavaScript) running on my local machine
Below is the test script I'm trying to run (Node v14.16.0, npm v6.14.11, mongodb npm library v4.13.0, on a Windows PC):
const { MongoClient } = require('mongodb');
const path = require('path');
const capem = path.join(__dirname, '.\\ca.pen');

async function main() {
  const uri = "mongodb://aaa.unix.abc:27018,bbb.unix.abc:27018,ccc.unix.abc:27018/mydb?replicaSet=myreplicaset";
  var mongoOpt = { sslValidate: true, sslCert: capem };
  const client = new MongoClient(uri, mongoOpt);
  try {
    await client.connect();
    await doSomething(client);
  }
Running the above runs for many seconds without doing anything before giving MongoServerSelectionError:
Reason: TopologyDescription {
  type: 'ReplicaSetNoPrimary',
  servers....
My suspicion is that the URI is correct, but that I somehow need to specify the "--authenticationMechanism=GSSAPI --authenticationDatabase=$external --username user#NONPROD#ABC.COM --password password --ssl --sslCAFile C:\mymongostuff\ca.pem" part outside the URI for it to be equivalent to my working batch/PowerShell scripts.
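Something like the following is roughly what I imagine it would look like (a sketch based on the 4.x driver option names authMechanism, authSource, tls and tlsCAFile rather than the legacy sslValidate/sslCert ones; I haven't confirmed the GSSAPI specifics):

// Sketch only (unverified): moving the CLI auth/TLS flags into MongoClient
// options. GSSAPI also requires the optional "kerberos" npm package.
const { MongoClient } = require('mongodb');
const path = require('path');

const uri = 'mongodb://aaa.unix.abc:27018,bbb.unix.abc:27018,ccc.unix.abc:27018/mydb?replicaSet=myreplicaset';

const client = new MongoClient(uri, {
  authMechanism: 'GSSAPI',                    // --authenticationMechanism=GSSAPI
  authSource: '$external',                    // --authenticationDatabase=$external
  auth: { username: 'user#NONPROD#ABC.COM', password: 'password' },
  tls: true,                                  // --ssl
  tlsCAFile: path.join(__dirname, 'ca.pem'),  // --sslCAFile
});

async function main() {
  try {
    await client.connect();
    console.log('connected');
  } finally {
    await client.close();
  }
}

main().catch(console.error);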

How to update the machine type of a GCP instance using the Node.js client library

I have a Node server which handles GCP instance operations. I am trying to update the machine type of an existing running instance. I don't want to update any other properties like disk size.
const computeClient = new Compute.InstancesClient({
  projectId: "project",
  keyFilename: "keyfile",
});

let resource = {
  instance: "testinstance",
  instanceResource: {
    machineType: "zones/us-central1-a/machineTypes/e2-standard-4",
    name: "testinstance"
  },
  project: "project",
  zone: "us-central1-a"
}

const resp1 = await computeClient.update(resource);
When I try to run the above code, this error occurs:
Stacktrace:
====================
Error: Invalid value for field 'resource.disks': ''. No disks are specified.
at Function.parseHttpError (////node_modules/google-gax/build/src/googleError.js:49:37)
at decodeResponse (///node_modules/google-gax/build/src/fallbackRest.js:72:49)
at ////node_modules/google-gax/build/src/fallbackServiceStub.js:90:42
at processTicksAndRejections (node:internal/process/task_queues:96:5)
Node version: v16.14.0
@google-cloud/compute version: 3.1.2
Any solution? Any code sample to update the machine type?
If you only want to update your instance's machine type, you should use the setMachineType method directly, which is meant for this specifically. See the example below:
// Imports the Compute library
const {InstancesClient} = require('@google-cloud/compute').v1;

// Instantiates a client
const computeClient = new InstancesClient();

const instance = "instance-name";
const instancesSetMachineTypeRequestResource = {machineType: "zones/us-central1-a/machineTypes/n1-standard-1"};
const project = "project-id";
const zone = "us-central1-a";

async function callSetMachineType() {
  // Construct request
  const request = {
    instance,
    instancesSetMachineTypeRequestResource,
    project,
    zone,
  };

  // Run request
  const response = await computeClient.setMachineType(request);
  console.log(response);
}

callSetMachineType();
Note that the machine type can only be changed on a TERMINATED instance, as documented here. You'll need to first ensure the instance is stopped, or stop it in your code prior to updating the machine type. More details on the available methods here.
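For completeness, here is a rough sketch of the stop, set machine type, start sequence. It follows the operation-polling pattern used in the current @google-cloud/compute samples; the exact shape of the returned operation object can vary between library versions, so treat it as a starting point rather than a drop-in:

// Rough sketch: stop the instance, change its machine type, then start it again.
// In some library versions the call returns the operation directly rather than
// under `latestResponse`, so adjust accordingly.
const compute = require('@google-cloud/compute');

const instancesClient = new compute.InstancesClient();
const operationsClient = new compute.ZoneOperationsClient();

const project = 'project-id';
const zone = 'us-central1-a';
const instance = 'testinstance';

// Poll a zonal operation until it reaches DONE.
async function waitForOperation(operation) {
  while (operation.status !== 'DONE') {
    [operation] = await operationsClient.wait({
      operation: operation.name,
      project,
      zone,
    });
  }
}

async function changeMachineType() {
  // 1. Stop the instance (the machine type can only change while TERMINATED).
  const [stopResponse] = await instancesClient.stop({ project, zone, instance });
  await waitForOperation(stopResponse.latestResponse);

  // 2. Set the new machine type.
  const [setResponse] = await instancesClient.setMachineType({
    project,
    zone,
    instance,
    instancesSetMachineTypeRequestResource: {
      machineType: `zones/${zone}/machineTypes/e2-standard-4`,
    },
  });
  await waitForOperation(setResponse.latestResponse);

  // 3. Start the instance again.
  const [startResponse] = await instancesClient.start({ project, zone, instance });
  await waitForOperation(startResponse.latestResponse);
}

changeMachineType().catch(console.error);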

How to connect to Google Cloud SQL (PostgreSQL) from Cloud Functions?

I feel like I've tried everything. I have a Cloud Function that I am trying to connect to Cloud SQL (PostgreSQL engine). Before I do so, I pull connection string info from Secret Manager, set that up in a credentials object, and call a pg (package) pool to run a database query.
Below is my code:
Credentials:
import { Pool } from 'pg';

const credentials: sqlCredentials = {
  "host": "127.0.0.1",
  "database": "myFirstDatabase",
  "port": "5432",
  "user": "postgres",
  "password": "postgres1!"
};

const pool: Pool = new Pool(credentials);
await pool.query(`select CURRENT_DATE;`).catch(error => console.error(`error in pool.query: ${error}`));
Upon running the cloud function with this code, I get the following error:
error in pool.query: Error: connect ECONNREFUSED 127.0.0.1:5432
I have attempted to update the host to the private IP of the Cloud SQL instance, and also to the Cloud SQL instance name in this environment, but to no avail. Any other ideas?
Through much tribulation, I figured out the answer. Given that there is NO documentation on how to solve this, I'm going to put the answer here in hopes that I can come back here in 2025 and see that it has helped hundreds. In fact, I'm setting a reminder in my phone right now to check this URL on November 24, 2025.
Solution: The host must be set as:
/cloudsql/<googleProjectName(notId)>:<region>:<sql instanceName>
Ending code:
import { Pool } from 'pg';

const credentials: sqlCredentials = {
  "host": "/cloudsql/my-first-project-191923:us-east1:my-first-cloudsql-inst",
  "database": "myFirstDatabase",
  "port": "5432",
  "user": "postgres",
  "password": "postgres1!"
};

const pool: Pool = new Pool(credentials);
await pool.query(`select CURRENT_DATE;`).catch(error => console.error(`error in pool.query: ${error}`));
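If you'd rather not hard-code the connection info, the same thing can be read from environment variables. A small sketch; INSTANCE_CONNECTION_NAME, DB_NAME, DB_USER and DB_PASS are variable names I'm choosing here that you would configure on the function yourself:

import { Pool } from 'pg';

// Sketch: same connection, with the pieces pulled from environment variables
// (the variable names here are illustrative, not set by the platform).
const pool = new Pool({
  host: `/cloudsql/${process.env.INSTANCE_CONNECTION_NAME}`, // e.g. my-first-project-191923:us-east1:my-first-cloudsql-inst
  database: process.env.DB_NAME,
  user: process.env.DB_USER,
  password: process.env.DB_PASS,
});

await pool.query(`select CURRENT_DATE;`).catch(error => console.error(`error in pool.query: ${error}`));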

UnhandledPromiseRejectionWarning: TypeError: Channel credentials must be a ChannelCredentials object in GCP Batch publishing

I am trying to do batch publishing of messages using the Node module @google-cloud/pubsub. My batch publishing code looks like this:
const { PubSub } = require("@google-cloud/pubsub");
const grpc = require("grpc");

const createPublishEventsInBatch = (fastify, topic) => {
  const pubSub = new PubSub({ grpc });
  const batchPublisher = pubSub.topic(topic, {
    batching: {
      maxMessages: 100,
      maxMilliseconds: 1000
    }
  });

  return (logTrace, data, eventInfo, version) => {
    const { entityType, eventType } = eventInfo;
    fastify.log.debug({
      logTrace,
      eventType: eventType,
      data,
      message: `Publishing batch events for ${entityType}`
    });
    const event = createEvent(data, entityType, eventType, logTrace, version);
    batchPublisher.publish(Buffer.from(JSON.stringify(event)));
    fastify.log.debug({
      traceHeaders: logTrace,
      tenant: data.tenant,
      message: "Event publish completed",
      data
    });
  };
};
The Pub/Sub and gRPC versions are as follows:
"@google-cloud/pubsub": "^2.18.1",
"grpc": "^1.24.11"
When I publish a message with the above code, I get the following error:
(node:6) UnhandledPromiseRejectionWarning: TypeError: Channel credentials must be a ChannelCredentials object
at new ChannelImplementation (/app/node_modules/@grpc/grpc-js/build/src/channel.js:75:19)
at new Client (/app/node_modules/@grpc/grpc-js/build/src/client.js:61:36)
at new ServiceClientImpl (/app/node_modules/@grpc/grpc-js/build/src/make-client.js:58:5)
at GrpcClient.createStub (/app/node_modules/google-gax/build/src/grpc.js:334:22)
at processTicksAndRejections (internal/process/task_queues.js:95:5)
I am seeing this issue only in my production environment; in staging and all my lower environments this works fine. Can somebody please guide me on how to fix this issue?
Not in regard to the exception, but I wanted to mention that you'd generally want to do this once and then cache it:
const pubSub = new PubSub({ grpc });
const batchPublisher = pubSub.topic(topic, {
This lets you avoid a lot of init overhead, possibly some memory leaks (from proto parsing), and lets you keep a single publishing queue (and batching) for all requests.
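A minimal sketch of what I mean, caching at module scope (publishEvent and the Map-based cache are just illustrative names, not part of the library):

const { PubSub } = require('@google-cloud/pubsub');

// Created once per process and reused for every publish.
const pubSub = new PubSub();
const publishers = new Map(); // topic name -> cached batch publisher

function getBatchPublisher(topicName) {
  if (!publishers.has(topicName)) {
    publishers.set(topicName, pubSub.topic(topicName, {
      batching: { maxMessages: 100, maxMilliseconds: 1000 },
    }));
  }
  return publishers.get(topicName);
}

async function publishEvent(topicName, event) {
  // All callers share one publisher (and therefore one batching queue) per topic.
  return getBatchPublisher(topicName).publish(Buffer.from(JSON.stringify(event)));
}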

Not able to add entities to an Azure storage table in Node.js when deployed to the cloud?

I am using socket.io in Node.js to implement chat functionality in my Azure cloud project. In it I have been adding the user chat history to tables using Node.js. It works fine when I run it on my local emulator, but strangely, when I deploy to Azure it doesn't work, and it doesn't throw any error either, so it's really mind-boggling. Below is my code.
var app = require('express')()
  , server = require('http').createServer(app)
  , sio = require('socket.io')
  , redis = require('redis');

var client = redis.createClient();
var io = sio.listen(server, {origins: '*:*'});
io.set("store", new sio.RedisStore);

process.env.AZURE_STORAGE_ACCOUNT = "account";
process.env.AZURE_STORAGE_ACCESS_KEY = "key";
var azure = require('azure');
var chatTableService = azure.createTableService();
createTable("ChatUser");

server.listen(4002);

socket.on('privateChat', function (data) {
  var receiver = data.Receiver;
  console.log(data.Username);
  var chatGUID1 = 'xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx'.replace(/[xy]/g, function(c) {
    var r = Math.random()*16|0, v = c == 'x' ? r : (r&0x3|0x8);
    return v.toString(16);
  });
  var chatRecord1 = {
    PartitionKey: data.Receiver,
    RowKey: data.Username,
    ChatID: chatGUID1,
    Username: data.Receiver,
    ChattedWithUsername: data.Username,
    Timestamp: new Date(new Date().getTime())
  };
  console.log(chatRecord1.Timestamp);
  queryEntity(chatRecord1);
});

function queryEntity(record1) {
  chatTableService.queryEntity('ChatUser'
    , record1.PartitionKey
    , record1.RowKey
    , function (error, entity) {
      if (!error) {
        console.log("Entity already exists")
      }
      else {
        insertEntity(record1);
      }
    })
}

function insertEntity(record) {
  chatTableService.insertEntity('ChatUser', record, function (error) {
    if (!error) {
      console.log("Entity inserted");
    }
  });
}
It's working on my local emulator but not in the cloud, and I came across something saying that the DateTime variable of an entity should not be null when creating a record in a cloud table. But I'm pretty sure the way I'm passing the timestamp is fine, right? Any other ideas why it might be working locally but not in the cloud?
EDIT:
I have also been getting this error when running the socket.io server, but in spite of this error the socket.io functionality works fine, so I didn't pay much attention to it. I have no idea what the error means in the first place.
{ [Error: connect ECONNREFUSED]
code: 'ECONNREFUSED',
errno: 'ECONNREFUSED',
syscall: 'connect' }
A couple of things:
You shouldn't need to set Timestamp; the service should populate that automatically when you insert a record.
When running it locally you can set the environment variables to the Windows Azure storage account settings and see if it will successfully write to the table when running on your developer box. Instead of running in the emulator, just set the environment variables and run the app directly with node.exe.
Are you running in a web role or worker role? I'm assuming it's a cloud service since you mentioned the emulator. If it's a worker role, maybe add some instrumentation to log to file to assist in debugging. If it's a web role you can add an iisnode.yml file in the root of the application, with the following line in the file to enable logging of stdout/stderr:
loggingEnabled: true
This will capture stdout/stderr to an iislog folder under the approot folder on e: or f: of the web role instance. You can remote desktop to the instance and look at the logs to see if the logs you have for successful insertion are occurring.
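It's also worth logging the error branch in insertEntity itself; as written, a failed insert is silent. A small sketch using the same azure module as above:

// Same call as in the question, but logging failures too so they show up
// in the captured stdout/stderr.
function insertEntity(record) {
  chatTableService.insertEntity('ChatUser', record, function (error) {
    if (!error) {
      console.log("Entity inserted");
    } else {
      console.error("Entity insert failed:", error);
    }
  });
}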
Otherwise, it's not obvious from the code above what's going on. Similar code worked fine for me. Relevant bits for my test code can be found at https://gist.github.com/Blackmist/5326756.
Hope this helps.
