OrbitDB can't replicate database on different peer - node.js

I am facing a problem with my p2p OrbitDB database. Everything worked fine until I moved the database to another server.
const ipfsOptions = {
  repo: './ipfs'
}
// Spawn a local IPFS node and create an OrbitDB instance on top of it
const ipfs = await IPFS.create(ipfsOptions)
const orbitdb = await OrbitDB.createInstance(ipfs, { directory: './orbitdb' })
let publicDB
try {
  // Open the database created on the other machine by its address
  publicDB = await orbitdb.open("/orbitdb/zdpuB2kVAbJEk1aZBeeKcwz2ehfDaMWi3upkD3ZHwb15zesLF/hub")
  console.log(publicDB.get('hello'))
} catch (err) {
  console.log(err)
}
I created an OrbitDB database on another computer and wanted to open it from my computer, but it doesn't work; I get TimeoutError: request timed out.
TimeoutError: request timed out
at maybeThrowTimeoutError (D:\Source\iprs-node\node_modules\ipfs-core-utils\cjs\src\with-timeout-option.js:35:15)
at D:\Source\iprs-node\node_modules\ipfs-core-utils\cjs\src\with-timeout-option.js:78:9
at runNextTicks (internal/process/task_queues.js:60:5)
at processTimers (internal/timers.js:497:9)
at async Object.read (D:\Source\iprs-node\node_modules\orbit-db-io\index.js:59:17)
at async OrbitDB.open (D:\Source\iprs-node\node_modules\orbit-db\src\OrbitDB.js:452:22)
at async initOrbit (D:\Source\iprs-node\services\orbitdb\index.js:26:20)
at async initAll (D:\Source\iprs-node\index.js:9:5) {
code: 'ERR_TIMEOUT'
}
Does anyone know how to fix this?
Or can someone enlighten me as to how this works?
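For what it's worth, opening a database by its address only works if the two IPFS nodes can actually reach each other; if the peer that holds the data is not connected, fetching the database manifest times out exactly like this. A minimal sketch of dialing the other peer before opening, assuming its multiaddress is known (the address below is a placeholder, not a real peer):
// Placeholder multiaddress of the machine that created the database
const remotePeer = '/ip4/203.0.113.5/tcp/4001/p2p/QmRemotePeerId'
// Dial the remote peer directly so the manifest and log heads can be fetched
await ipfs.swarm.connect(remotePeer)
// Confirm the connection is up before calling orbitdb.open()
const peers = await ipfs.swarm.peers()
console.log('connected peers:', peers.length)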

Related

MongoDB refusing to connect using NodeJS

I have been struggling to connect to MongoDB for the past few hours now.
I have taken the code from the atlas documentation as seen below:
const { MongoClient } = require('mongodb')
const url =
  'mongodb+srv://user:pass@mmmcluster.axapu.mongodb.net/appname?retryWrites=true&w=majority&useNewUrlParser=true&useUnifiedTopology=true'
const client = new MongoClient(url)
async function run() {
  try {
    await client.connect()
    console.log('Connected correctly to server')
  } catch (err) {
    console.log(err.stack)
  } finally {
    await client.close()
  }
}
run().catch(console.dir)
I cannot for the life of me figure out why I keep getting the error below:
MongoServerSelectionError: connect ETIMEDOUT 157.241.16.152:27017
at Timeout._onTimeout (E:\Documents\Github\#test\node_modules\mongodb\lib\sdam\topology.js:306:38)
at listOnTimeout (node:internal/timers:557:17)
at processTimers (node:internal/timers:500:7)
For the record:
All IP addresses are whitelisted on my database access list
I have turned off any firewalls that would be running over the internet, and whitelisted Node.js in the Windows firewall
No spaces or brackets have been included in my password or username
The user has admin privileges on Atlas
Any suggestions? I am out of ideas here, and I don't want to have to install Compass as a workaround, since ideally all methods of connection should work.
I think you should take a look here:
const url =
  'mongodb+srv://user:pass@mmmcluster.axapu.mongodb.net/appname?retryWrites=true&w=majority&useNewUrlParser=true&useUnifiedTopology=true'
Since the string contains the placeholders user and pass, you might want to replace them with the actual username and password for the database.
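One more thing worth checking once real credentials are in place: if the password contains special characters, it has to be percent-encoded or the SRV connection string will not parse. A minimal sketch, assuming hypothetical credentials myUser / myP@ss:
// Hypothetical credentials; encodeURIComponent escapes characters like
// '@' or '#' that would otherwise break connection-string parsing
const user = 'myUser'
const pass = encodeURIComponent('myP@ss') // -> 'myP%40ss'
const url = `mongodb+srv://${user}:${pass}@mmmcluster.axapu.mongodb.net/appname?retryWrites=true&w=majority`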

NestJS and IPFS - no connection on the server instance

I am struggling to bind an IPFS node to a NestJS instance on the server. Everything worked fine on my local machine, and on the server I do have a working instance of IPFS. I know it works because I can see connected peers, and I can see a file uploaded through the server console via the https://ipfs.io/ipfs gateway.
The code of the IPFS service is quite simple and it does not produce any errors until I try to upload something.
import { Injectable } from '@nestjs/common';
import { create } from 'ipfs-http-client';

@Injectable()
export class IPFSClientService {
  private client = create({
    protocol: 'http',
    port: 5001
  });

  public async upload(file: Express.Multer.File): Promise<string> {
    const fileToAdd = { path: file.filename, content: file.buffer };
    try {
      const addedFile = await this.client.add(fileToAdd, { pin: true });
      return addedFile.path;
    } catch (err) {
      console.log('err', err);
    }
  }
}
Unfortunately, the error message is enigmatic.
AbortController is not defined
at Client.fetch (/home/xxx_secret/node_modules/ipfs-utils/src/http.js:124:29)
at Client.fetch (/home/xxx_secret/node_modules/ipfs-http-client/cjs/src/lib/core.js:141:20)
at Client.post (/home/xxx_secret/node_modules/ipfs-utils/src/http.js:171:17)
at addAll (/home/xxx_secret/node_modules/ipfs-http-client/cjs/src/add-all.js:22:27)
at processTicksAndRejections (internal/process/task_queues.js:97:5)
at Object.last [as default] (/home/xxx_secret/node_modules/it-last/index.js:13:20)
at Object.add (/home/xxx_secret/node_modules/ipfs-http-client/cjs/src/add.js:18:14)
at IPFSClientService.upload (/home/xxx_secret/src/ipfs/ipfs-client.service.ts:20:25)
I would appreciate any help in this matter, as I have no ideas left regarding this issue :/
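For context, the stack trace shows ipfs-utils reaching for the global AbortController, which only became a built-in Node.js global in v15. If the server runs an older Node than the local machine, that difference alone produces this error. A sketch of a workaround, assuming the abort-controller package from npm (upgrading Node to 16+ is the cleaner fix):
// Run this before ipfs-http-client is imported, e.g. at the top of main.ts.
// The abort-controller package is an assumption here, installed separately.
if (typeof global.AbortController === 'undefined') {
  global.AbortController = require('abort-controller');
}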

Ytdl-core is not working properly - "Could not find player config"

This error has started occurring on songs that worked perfectly previously.
Error:
Error: Could not find player config
at exports.getBasicInfo (/app/node_modules/ytdl-core/lib/info.js:90:13)
at runMicrotasks (<anonymous>)
at processTicksAndRejections (node:internal/process/task_queues:93:5)
at async Map.getOrSet (/app/node_modules/ytdl-core/lib/cache.js:24:19)
at async exports.getInfo (/app/node_modules/ytdl-core/lib/info.js:215:14)
at async Map.getOrSet (/app/node_modules/ytdl-core/lib/cache.js:24:19)
at async Object.execute (/app/commands/play.js:59:20)
For some reason, retrying the same video multiple times would eventually make it work, but now it never works on any video I play.
The play part:
const dispatcher = queue.connection
  .play(stream, { type: streamType })
  .on("finish", () => {
    if (collector && !collector.ended) collector.stop();
    if (queue.loop) {
      let lastSong = queue.songs.shift();
      queue.songs.push(lastSong);
      module.exports.play(queue.songs[0], message);
    } else {
      // Recursively play the next song
      queue.songs.shift();
      module.exports.play(queue.songs[0], message);
    }
  })
  .on("error", (err) => {
    console.error(err);
    queue.songs.shift();
    module.exports.play(queue.songs[0], message);
  });
I have a similar issue. The problem is that this is currently an ongoing issue for everybody who uses the library; check out the open issue on the ytdl-core GitHub repository for progress.
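Until the upstream fix lands, and since retrying the same video sometimes works, a retry wrapper can paper over the intermittent failures. A minimal sketch, assuming ytdl-core's getInfo API; the maxAttempts knob is hypothetical, not part of the library:
const ytdl = require("ytdl-core");

// Retry getInfo a few times before giving up
async function getInfoWithRetry(url, maxAttempts = 3) {
  let lastErr;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await ytdl.getInfo(url);
    } catch (err) {
      lastErr = err;
      console.warn(`getInfo attempt ${attempt} failed: ${err.message}`);
    }
  }
  throw lastErr;
}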

How to fix an ECONNREFUSED error when try to connect to a postgresql pool in nodejs

When I run a bulk of queries in Node.js with Promise.all, I get an ECONNREFUSED error after some number of queries have succeeded.
I am building a Node server for my web application. I make queries in response to AJAX requests from the client, and they have all worked fine until now.
I tried to load data from Excel into a number of tables (I use the pg module). I wrote code that inserts a record after checking that there is no identical record already, using promises for the queries. But after making some queries, it started getting ECONNREFUSED errors.
I have changed max_connections to 10000 and shared_buffers to 25000MB, and restarted the Postgres server.
I have changed max_connections to 1000 and shared_buffers to 2500MB, and restarted the Postgres server.
I have changed max_connections to 300 and shared_buffers to 2500MB, and restarted the Postgres server.
I have changed my code from a PostgreSQL Pool to a PostgreSQL Client.
I have omitted some queries from the promise array.
But nothing changed. About 180 records were inserted in total, and then I got an error.
function loadData(auditArrayObject){
  return new Promise(function (resolve, reject) {
    let promises = [
      loadAuditItem(auditArrayObject.audit, auditArrayObject.auditVersion),
      loadProcesses(auditArrayObject.processArray),
      loadControlAims(auditArrayObject.controlAimArray),
      loadCriterias(auditArrayObject.criteriaArray),
      loadParameters(auditArrayObject.parameterArray),
    ]
    Promise.all(promises)
      .then(objectsWithId => {
        ......
      }
}
function loadProcesses(processArray){
  return new Promise(function (resolve, reject) {
    let promises = [];
    for (let i = 0; i < processArray.length; i++) {
      let process = new Process(null, processArray[i], false)
      let promise = postGreAPI.readProcessByName(process.name)
        .then(resultProcess => {
          if (!resultProcess) {
            postGreAPI.createProcess(process)
              .then(createdProcess => {
                resolve(createdProcess)
              })
              .catch(err => {
                reject({ msg: "createProcess failed", err: err })
              })
          } else {
            return (resultProcess)
          }
        })
        .catch(err => {
          reject({ msg: "readProcessByName failed", err: err })
        })
      promises.push(promise)
    }
    Promise.all(promises)
      .then(processArray => {
        resolve({ key: "Process", value: processArray })
      })
      .catch(err => {
        reject({ msg: "One of the processes could not be inserted", err: err })
      })
  });
}
postGreAPI.readProcessByName:
var readProcessByName = function (name){
  return new Promise(function (resolve, reject) {
    let convertedName = convertApostrophe(name)
    let query = "SELECT * FROM process WHERE name='" + convertedName + "'"
    queryDb(query)
      .then(result => {
        if (result.rows.length > 0) {
          let process = new Process(result.rows[0].id,
            result.rows[0].name,
            result.rows[0].isactive);
          resolve(process)
        } else {
          resolve(null)
        }
      })
      .catch(err => {
        reject(err)
      })
  })
}
queryDb:
var queryDb = function (query, params) {
  return new Promise(function (resolve, reject) {
    // A new pool is created on every call
    let pool = new PostGre.Pool(clientConfig);
    pool.connect(function(err, client, done) {
      if (err) {
        return reject(err);
      }
      client.query(query, params, function(err, result) {
        done();
        if (err) {
          return reject(err);
        }
        resolve(result)
      });
    });
  })
}
And the error is:
Error: connect ECONNREFUSED 127.0.0.1:5432
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1113:14)
errno: 'ECONNREFUSED',
code: 'ECONNREFUSED',
syscall: 'connect',
address: '127.0.0.1',
port: 5432 }
Actually, I had succeeded with this load before my laptop configuration changed. It used to be Windows 7, but now it is Windows 10.
Your code is in bits and pieces, so I cannot point to the exact line with the error, but the error comes down to one reason: connection pooling.
Your code attempts to create a connection to Postgres every time you query the database; that is why the initial runs insert some data and then it starts failing. You need to remove the per-query connection setup, use a single instance to handle the queries, and close the connection once everything has completed; see the sketch below.
You also mentioned you upgraded from Windows 7 to Windows 10. There is no problem with the Windows version itself, but your hardware may have a higher configuration now too (RAM and number of cores); because of the way the event loop works, you sometimes don't get this error on low-configuration systems but do get it on larger ones.
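A minimal sketch of the single-pool approach described above, reusing the clientConfig from the question; pg's pool.query checks a client out and releases it automatically, and the $1 placeholder removes the manual apostrophe escaping (this readProcessByName is a simplified stand-in for the original):
const PostGre = require('pg');

// One pool for the whole process, created once at module load
const pool = new PostGre.Pool(clientConfig);

var queryDb = function (query, params) {
  // pool.query acquires a client, runs the query, and releases the client
  return pool.query(query, params);
};

var readProcessByName = function (name) {
  // Parameterized query: pg handles escaping, including apostrophes
  return queryDb("SELECT * FROM process WHERE name = $1", [name])
    .then(result => result.rows.length > 0 ? result.rows[0] : null);
};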

Saving data to Postgres from AWS Lambda

I'm building a Lambda function that is supposed to save game feedback, like a performance grade, into my Postgres database, which is in AWS RDS.
I'm using Node.js with TypeScript, and the function is kind of working, but in a strange way.
I made an API Gateway so I can POST data to the URL for the Lambda to process and save. The thing is, when I POST the data, the function seems to process it until it reaches a maximum limit of connected clients, and then it seems to lose the other clients' data.
Another problem is that every time I POST data I get a response saying there was an Internal Server Error, with an 'X-Cache: Error from cloudfront' header. For a GET request I figured out that it was giving me this response because the format of the response was incorrect, but in this case I fixed the response format and still get this problem...
Sometimes I get a timeout response.
My function's code:
import { APIGatewayEvent, Callback, Context, Handler } from "aws-lambda";
import { QueryConfig, Client, Pool, PoolConfig } from "pg";

export const insert: Handler = async (
  event: APIGatewayEvent,
  context: Context,
  cb: Callback
) => {
  // context.callbackWaitsForEmptyEventLoop = false;
  const config: PoolConfig = {
    user: process.env.PG_USER,
    host: process.env.PG_HOST,
    database: process.env.PG_DB,
    password: process.env.PG_PASS,
    port: parseInt(process.env.PG_PORT),
    idleTimeoutMillis: 0,
    max: 10000
  };
  const pool = new Pool(config);
  let postdata = event.body || event;
  console.log("POST DATA:", postdata);
  if (typeof postdata == "string") {
    postdata = JSON.parse(postdata);
  }
  let query: QueryConfig = <QueryConfig>{
    name: "get_all_questions",
    text:
      "INSERT INTO gamefeedback (gameid, userid, presenterstars, gamestars) VALUES ($1, $2, $3, $4);",
    values: [
      parseInt(postdata["game_id"]),
      postdata["user_id"],
      parseInt(postdata["presenter_stars"]),
      parseInt(postdata["game_stars"])
    ]
  };
  console.log("Before Connect");
  let con = await pool.connect();
  let res = await con.query(query);
  console.log("res.rowCount:", res.rowCount);
  if (res.rowCount != 1) {
    cb(new Error("Error saving the feedback."), {
      statusCode: 400,
      body: JSON.stringify({
        message: "Error saving data!"
      })
    });
  }
  cb(null, {
    statusCode: 200,
    body: JSON.stringify({
      message: "Saved successfully!"
    })
  });
  console.log("The End");
};
Then the CloudWatch log for the error with the maximum number of connected clients looks like this:
2018-08-03T15:56:04.326Z b6307573-9735-11e8-a541-950f760c0aa5 (node:1) UnhandledPromiseRejectionWarning: error: sorry, too many clients already
at u.parseE (/var/task/webpack:/node_modules/pg/lib/connection.js:553:1)
at u.parseMessage (/var/task/webpack:/node_modules/pg/lib/connection.js:378:1)
at Socket.<anonymous> (/var/task/webpack:/node_modules/pg/lib/connection.js:119:1)
at emitOne (events.js:116:13)
at Socket.emit (events.js:211:7)
at addChunk (_stream_readable.js:263:12)
at readableAddChunk (_stream_readable.js:250:11)
at Socket.Readable.push (_stream_readable.js:208:10)
at TCP.onread (net.js:607:20)
Can any of you guys help me with this strange problem?
Thanks
Well, for one thing, you need to move the creation of the pool above the handler, like so:
const config: PoolConfig = {
  user: process.env.PG_USER,
  ...
};
const pool = new Pool(config);
export const insert: Handler = async (
  event: APIGatewayEvent,
  context: Context,
  cb: Callback
) => {
  ..etc
The way you have it, you are creating a pool on every invocation. If you create the pool outside the handler, Lambda has a chance to share the pool between invocations, because the execution environment, and anything defined outside the handler, can be reused across warm invocations.
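To make that concrete, here is a JavaScript sketch of the handler reorganized that way, using the question's environment variables; the small max value and the async return style (instead of the callback) are assumptions of the sketch, not the poster's code:
const { Pool } = require("pg");

// Created once per execution environment, so warm invocations reuse it
const pool = new Pool({
  user: process.env.PG_USER,
  host: process.env.PG_HOST,
  database: process.env.PG_DB,
  password: process.env.PG_PASS,
  port: parseInt(process.env.PG_PORT),
  max: 2 // keep this small: every concurrent Lambda holds its own pool
});

exports.insert = async (event) => {
  const postdata = typeof event.body === "string" ? JSON.parse(event.body) : event.body || event;
  // pool.query checks out a client and releases it automatically
  const res = await pool.query(
    "INSERT INTO gamefeedback (gameid, userid, presenterstars, gamestars) VALUES ($1, $2, $3, $4)",
    [
      parseInt(postdata["game_id"]),
      postdata["user_id"],
      parseInt(postdata["presenter_stars"]),
      parseInt(postdata["game_stars"])
    ]
  );
  // Returning a value from an async handler replaces the callback style
  if (res.rowCount !== 1) {
    return { statusCode: 400, body: JSON.stringify({ message: "Error saving data!" }) };
  }
  return { statusCode: 200, body: JSON.stringify({ message: "Saved successfully!" }) };
};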
