Timed out fetching a new connection from the connection pool prisma - nestjs

I have a NestJS scheduler which runs every hour.
I'm using multiple libraries to connect to the Postgres database through the NestJS app:
Prisma
Knex
I have a scheduler table that holds the URL to run and the datetime to run it at,
and a rule table that holds tablename, columnname, a logical operator (i.e. >, <, =, !=) and a conditional operator (AND, OR).
Knex creates a query from these rules, and that query is stored in the database.
for (const t of schedules) {
  // this doesn't await, so the calls to the URLs are fired simultaneously
  fetch(t.url).catch((err) => console.error(err));
}
Each URL inserts records, and it can take 1, 2, or 3 hours depending on the URL. But after a certain time I'm getting the Prisma error "Timed out fetching a new connection from the connection pool".
Is it because I'm using multiple clients to connect to the database?

You can configure the connection_limit and pool_timeout parameters by passing them in the connection string. You can set connection_limit to 1 to make sure that Prisma doesn't open additional database connections; this way you won't get timeout errors.
Increasing the pool timeout would give the query engine more time to process queries in the queue.
Reference for connection_limit and pool_timeout parameters: Reference.
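For illustration, a minimal sketch of what that could look like, assuming a Postgres datasource named db in schema.prisma (host, credentials and values are placeholders; the same parameters can also simply be appended to the DATABASE_URL environment variable):
import { PrismaClient } from '@prisma/client';

// connection_limit=1 keeps Prisma to a single pooled connection;
// pool_timeout=30 waits up to 30 seconds for a free connection before
// throwing the "Timed out fetching a new connection" error.
const prisma = new PrismaClient({
  datasources: {
    db: {
      url: 'postgresql://user:password@localhost:5432/mydb?connection_limit=1&pool_timeout=30',
    },
  },
});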

Related

Azure SQL serverless is not waking up on connection attempt

I'm testing Azure SQL Serverless and from SSMS it seems to work fine, but from my ASP.NET Core application it never wakes up.
Using SSMS I can open a connection to a sleeping Serverless SQL database and after a delay the connection will go through.
Using my ASP.NET Core application I tried the same. From the login page I tried to log in, which opens a connection to the database; it fails after 10 or 11 seconds (I looked up the default timeout and it's supposed to be 15 seconds, but in this case it always seems to be about 10.5 seconds +/- 0.5 s). According to the docs, the first connection attempt may fail but subsequent ones should succeed, yet I can send multiple queries to the database and it always fails with the following error:
Microsoft.Data.SqlClient.SqlException (0x80131904): Database 'myDb' on server
'MyDbSvr.database.windows.net' is not currently available. Please retry the connection later. If the
problem persists, contact customer support, and provide them the session tracing ID of
'{XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX}'.
If I wake the database up using SSMS then the login web page can connect to the database and succeeds.
I have added Connect Timeout=120; to the connection string.
The connection does happen during an HTTP request that is marked async on the Controller, though I don't know if that makes any difference.
Am I doing something wrong or is there something additional I need to do to get the DB to wake?
[Update]
As an extra test I wrote the following:
using Microsoft.Data.SqlClient;

void Main()
{
    SqlConnection con = new SqlConnection("Server=mydbsvr.database.windows.net;Database=mydb;User Id=abc;Password=xyz;Connect Timeout=120;");
    Console.WriteLine(con.ConnectionTimeout);
    con.Open();
    var cmd = con.CreateCommand();
    cmd.CommandText = "select getdate();";
    Console.WriteLine(cmd.ExecuteScalar());
}
and got the same error.
I figured it out and it's the dumbest thing.
This Azure SQL Server instance was migrated from another subscription and the group that migrated it gave it a new name, but they did something that allowed the use of the old name also. I'm researching to figure out how that was done. I will update this answer when I find out what that was.
As it turns out, using the old name with a serverless database won't wake up the DB. I don't know why, but if you change to the new/real server name it works. You do have to add a retry to the connection as it may fail the first few times.
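A rough sketch of that retry idea as a generic helper (written in TypeScript here, but the same loop-with-delay pattern applies around the C# SqlConnection.Open() call; the attempt count and delay are illustrative):
async function connectWithRetry<T>(open: () => Promise<T>, attempts = 5, delayMs = 10000): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await open(); // the first attempts may fail while the serverless DB resumes
    } catch (err) {
      lastError = err;
      await new Promise((resolve) => setTimeout(resolve, delayMs)); // wait before retrying
    }
  }
  throw lastError;
}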
[Update]
The new server allows logins using the old name by using an Azure SQL Database Alias https://learn.microsoft.com/en-us/azure/sql-database/dns-alias-overview

One row test insertion to SQL Server RDS works but full load times out

I have a Glue job script that does this (not showing imports and setup here) and it inserts the row into SQL Server RDS just fine:
columns = ['test']
vals = [("test",)]  # single-row, single-column test data
df = sqlContext.createDataFrame(vals, columns)
test = DynamicFrame.fromDF(df, glueContext, "test")
datasink = glueContext.write_dynamic_frame.from_catalog(frame = test,
    database = "database-name", table_name = "table-name")
job.commit()
When I run with this same connection but for a larger test load (ends up being about 100 rows) I get this error:
An error occurred while calling o596.pyWriteDynamicFrame. The TCP/IP connection to the host , port 1433 has failed. Error: "Connection timed out: no further information. Verify the connection properties. Make sure that an instance of SQL Server is running on the host and accepting TCP/IP connections at the port. Make sure that TCP connections to the port are not blocked by a firewall
The thing is that I know there's no firewall or security group issue since one row inserts just fine. I've tried adding a loginTimeout parameter to the JDBC connection like so:
jdbc:sqlserver://<host>:<port>;databaseName=dbName;loginTimeout=600;
As it indicates, you can do so here. But the connection fails in Glue when I do that, and succeeds when I remove the loginTimeout parameter.
I've also checked the remote timeout configuration on my SQL Server instance and it shows as 600 seconds which is longer than any of my failed jobs so it couldn't be that.
How can I get around this connection timeout error? It seems to be a limitation built into Glue.
In order to do a JDBC connection with Glue you need to follow the steps in this documentation: https://docs.aws.amazon.com/glue/latest/dg/setup-vpc-for-glue-access.html
We had done that, but it turns out that our self-referencing security group wasn't actually self-referencing. Once we changed that, it got resolved.
I also had to create the connection as an Amazon RDS connection and not as a JDBC connection even though it's doing the same thing under the hood.
Even after doing all that I still had issues. It turns out that you need to add the SQL connection specifically to the job, outside of the script. If you hit "Edit Job" you'll see a list of SQL connections there. If the connection you're trying to hit isn't on the list of required connections, you will always time out.

504 gateway timeout error NodeJs

I have an application hosted in a DC/OS instance. The application queries the Snowflake database and gets the result. I am using the Snowflake SDK to query the Snowflake database, and we are also streaming the result we get from Snowflake.
var statement = connection.execute({
  sqlText: sql,
  complete: function (err, stmt, rows) {
    var stream = stmt.streamRows();
    callback(err, stream, response);
  }
});
But if the query is large and the processing takes time in Snowflake, I get a 504 gateway timeout error at my client, although the Node service is still running. Suppose I am hitting DC/OS from the browser/Postman: I will get the 504 timeout error there, but Snowflake still returns the result to my Node service. What is the right strategy to avoid it?
This is the error I am getting at my client from the server, even though my Node service still maintains its connection with Snowflake and gets the result from Snowflake.
Can you check what your statement timeout is set to?
Can you try out the following:
https://docs.snowflake.com/en/sql-reference/sql/alter-user.html
-- set timeout to 15 minutes
alter user USERNAME set STATEMENT_TIMEOUT_IN_SECONDS = 900;
https://docs.snowflake.com/en/sql-reference/sql/alter-session.html (parameter: STATEMENT_TIMEOUT_IN_SECONDS)
Examples
Set the timeout for statements executed in the session to 1 hour (3600 seconds):
alter session set STATEMENT_TIMEOUT_IN_SECONDS = 3600;
Set the timeout for statements executed in the session back to the default:
alter session unset STATEMENT_TIMEOUT_IN_SECONDS;
Have you reached out to snowflake support?
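As a rough sketch, reusing the connection.execute API and the connection, sql, callback and response variables from the snippet in the question (the 3600-second value is just an example), the session timeout could be raised before running the long query:
connection.execute({
  sqlText: 'ALTER SESSION SET STATEMENT_TIMEOUT_IN_SECONDS = 3600',
  complete: function (err) {
    if (err) {
      return callback(err);
    }
    // now run the long-running query as before
    connection.execute({
      sqlText: sql,
      complete: function (err, stmt, rows) {
        var stream = stmt.streamRows();
        callback(err, stream, response);
      }
    });
  }
});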

Duplicate data insert on multiple containers that run the same service

I have a problem with service replicas that process the same data.
My code gets data from a socket and then inserts that data into the DB. The problem is that if I have 2 containers of the same service (and I have more than 2), they both insert the same data and I get duplicated data in my DB.
Is there a way to tell only one of them to do the insert?
I'm using Docker and Kubernetes but I'm still new to them.
function dataStream(data) { // gets the data from the socket
  const payload = formatPayload(data);
  addToDb(payload); // I want this to happen only from 1 service
  broadcast(payload);
}

NodeJS/Express: ECONNRESET when doing multiple requests using Sequelize/Epilogue

I'm building a webapp using the following architecture:
a PostgreSQL database (called DB),
a NodeJS service (called DBService) using Sequelize to manipulate the DB and Epilogue to expose a REST interface via Express,
a NodeJS service called Backend serving as the backend and using DBService through REST calls,
an AngularJS website called Frontend using Backend.
Here are the versions I'm using:
PostgreSQL 9.3
Sequelize 2.0.4
Epilogue 0.5.2
Express 4.13.3
My DB schema is quite complex, containing 36 tables, and some of them contain a few hundred records. The DB is not meant to be written to very often, but mostly to be read from.
But recently I created a script in Backend to do a complete check-up of the data contained inside the DB: basically this script retrieves all data from all tables and does some basic checks on it. Currently the script only reads from the database.
To make this script work I had to remove the pagination limit of Epilogue by using the option pagination: false (see https://github.com/dchester/epilogue#pagination).
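For reference, disabling pagination on an Epilogue resource looks roughly like this (the CallTypes model and endpoints are hypothetical, based on the URL in the error below):
var epilogue = require('epilogue');
// after epilogue.initialize({ app: app, sequelize: sequelize })
epilogue.resource({
  model: db.CallType, // hypothetical model
  endpoints: ['/CallTypes', '/CallTypes/:id'],
  pagination: false // removes the default page-size limit
});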
But now when I launch my script I randomly obtain this kind of error:
The request failed when trying to retrieve a uniquely associated objects with URL:http://localhost:3000/CallTypes/178/RendererThemes.
Code : -1
Message : Error: connect ECONNRESET 127.0.0.1:3000
The error appears randomly during the script execution: it's not always this URL which is returned, and not even always the same tables or relations. The error message before the code is a custom message returned by Backend.
The URL is a reference to the DBService but I don't see any error in it, even using logging: console.log in Sequelize and DEBUG=express:* to see what happens in Express.
I tried to put some setTimeout calls in my Backend script to slow it down, without any real change. I also tried to tweak different values like the PostgreSQL max_connections limit (I set the limit to 1000 connections), or the Sequelize maxConcurrentQueries and pool values, but without success yet.
I did not find where I can customize the connection pool of Express; maybe that would do the trick.
I assume that the error comes from DBService, from the Express configuration or from somewhere in the configuration of the DB (either in Sequelize/Epilogue or even in the PostgreSQL server itself), but as I did not see any error in any log, I'm not sure.
Any idea to help me solve it?
EDIT
After further investigation I may have found the answer, which is very similar to How to avoid a NodeJS ECONNRESET error?: I'm using my own RestClient object to do my HTTP requests, and this object was built as a singleton with this method:
var NodeRestClient : any = require('node-rest-client').Client;
...
static getClient() {
  if (RestClient.client == null) {
    RestClient.client = new NodeRestClient();
  }
  return RestClient.client;
}
Then I was always using the same object to do all my requests and when the process was too fast, it created collisions... So I just removed the test if(RestClient.client == null) and for now it seems to work.
If there is a better way to manage that, by closing the request or managing a pool, feel free to contribute :)
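For reference, a minimal sketch of the change described above: getClient() now hands out a fresh node-rest-client instance on every call instead of reusing one shared singleton.
var NodeRestClient : any = require('node-rest-client').Client;
...
static getClient() {
  // no shared instance anymore, so fast consecutive requests don't collide
  return new NodeRestClient();
}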
