I have an application hosted in a DC/OS instance. The application queries a Snowflake database and returns the result. I am using the Snowflake SDK to query the Snowflake database, and we are also streaming the result we get back from Snowflake.
var statement = connection.execute({
  sqlText: sql,
  complete: function (err, stmt, rows) {
    // Stream the result rows instead of buffering them all in memory
    var stream = stmt.streamRows();
    callback(err, stream, response);
  }
});
But if the query is large and Snowflake takes a long time to process it, I get a 504 Gateway Timeout error at my client, although the node service is still running. That is, if I hit DC/OS from a browser/Postman I get the 504 timeout error, but Snowflake still returns the result to my node service. What is the right strategy to avoid this?
This is the error I am getting at my client from the server, even though my node service still maintains its connection with Snowflake and gets the result back.
Can you check what your statement timeout is set to?
Can you try out the following:
https://docs.snowflake.com/en/sql-reference/sql/alter-user.html
-- set the timeout to 15 minutes
alter user USERNAME set STATEMENT_TIMEOUT_IN_SECONDS = 900;
https://docs.snowflake.com/en/sql-reference/sql/alter-session.html
STATEMENT_TIMEOUT_IN_SECONDS can also be set at the session level.
Examples
Set the statement timeout for statements executed in the session to 1 hour (3600 seconds):
alter session set STATEMENT_TIMEOUT_IN_SECONDS = 3600;
Set the statement timeout for statements executed in the session back to the default:
alter session unset STATEMENT_TIMEOUT_IN_SECONDS;
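If you want to apply this from the Node service itself, a minimal sketch (assuming the same snowflake-sdk connection object as in the question; the 3600-second value is just an example) is to run the ALTER SESSION statement once after connecting, before issuing the long-running query:

connection.execute({
  // raise the statement timeout for this session to 1 hour (example value)
  sqlText: 'alter session set STATEMENT_TIMEOUT_IN_SECONDS = 3600',
  complete: function (err, stmt, rows) {
    if (err) {
      console.error('Failed to set session timeout: ' + err.message);
      return;
    }
    // ...now run the long query as in the snippet above...
  }
});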
Have you reached out to Snowflake support?
Related
I have a NestJS scheduler which runs every hour.
I'm using multiple libraries to connect to a Postgres database through the NestJS app:
prisma
Knex
I have a scheduler table that holds a URL to run at a given datetime,
and a rule table that holds a table name, column name, logical operator (i.e. >, <, =, !=) and conditional operator (AND, OR).
Knex builds a query that is stored in the database.
for (const t of schedules) {
  // this doesn't await, so all the URLs get called simultaneously
  fetch(t.url).catch((err) => console.error(err));
}
The URL will insert records; it can take 1, 2, or 3 hours depending on the URL.
But after a certain time
I get a "Timed out fetching a new connection from the connection pool" Prisma error.
Is it because I'm using multiple clients to connect to the database?
You can configure the connection_limit and pool_timeout parameters by passing them in the connection string. You can set connection_limit to 1 to make sure that Prisma doesn't initiate new database connections; this way you won't get timeout errors.
Increasing the pool timeout gives the query engine more time to process queries in the queue.
Reference for connection_limit and pool_timeout parameters: Reference.
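For example, a minimal sketch of the datasource URL (the user, host, and database here are placeholders; adapt the values to your setup):

postgresql://user:password@localhost:5432/mydb?connection_limit=1&pool_timeout=60

With connection_limit=1 the query engine opens a single connection and queues everything else, and pool_timeout (in seconds) is how long a query waits for a free connection before failing with the pool timeout error.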
I have an Azure Function App and Azure SQL DB.
In my Azure Function I query data (using EF Core 3) with the following code:
var stuffIWant = dbContext.UsageCounts
    .Where(a => a.Elo > 0 && a.Elo < 1300)
    .GroupBy(a => a.TheHash)
    .Select(b => new stuff { Hash = (long)b.Key, Count = b.Count() })
    .OrderByDescending(a => a.Count)
    .Take(10)
    .ToList();
I am getting a very high failure rate with an error that looks like:
[Error] Executed 'FunctionName' (Failed, Id=456456-0040-4349-81e3-54646546, Duration=30220ms) The wait operation timed out.
The exception:
[Error] Microsoft.Data.SqlClient.SqlException (0x80131904): Execution Timeout Expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
When I execute the function it sometimes (rarely) works fine and is able to make 8 queries like this (with the bounds for Elo changing) and each takes <200ms to complete.
I can also run this in a Sandbox project on my local machine, connecting to the same Azure DB using EF with the same model and can run it hundreds of times without ever timing out, each query taking <200ms to complete.
When the Azure Function does work, it always goes through each of the 8 queries and returns the data; when it doesn't work, it always fails at the first query.
I added a "test" query to my function:
var test = dbContext.UsageCounts.Where(a => a.Elo > 2200).Take(10).ToList();
This happens before my failing query, and always succeeds.
Why is my function timing out most of the time when this query is nowhere near the execution time limit?
My database is not set to auto-pause.
My compute and IO utilization is under 20%.
To resolve this [Error] Microsoft.Data.SqlClient.SqlException (0x80131904): Execution Timeout Expired. The timeout period elapsed prior to completion of the operation or the server is not responding., you can try the following:
In Tools -> Options -> Query Execution -> SQL Server -> General -> Execution time-out, set the time-out to more than 30 seconds, or to 0 to wait indefinitely.
Check your connection string, e.g. connectionString="Data Source=.;Initial Catalog=LMS;User ID=test;Password=test#123"
You can refer to The wait operation timed out establishing a connection to SQL Server FROM Azure Function App, Azure Function Execution Timeout Expired and Troubleshooting connectivity issues and other errors with Azure SQL Database and Azure SQL Managed Instance
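If the query itself needs more than the default 30-second command timeout, you can also raise the timeout on the DbContext before running the heavy query. A minimal sketch (this uses the standard EF Core SetCommandTimeout API; the 120-second value is an example to tune to your workload):

// using Microsoft.EntityFrameworkCore;  // brings SetCommandTimeout into scope
// Raise EF Core's command timeout (default 30 seconds) before the
// expensive GroupBy query; dbContext is the context from the question.
dbContext.Database.SetCommandTimeout(120);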
I am running an application that connects to an Azure SQL database. The problem is that the application connects to the database on multiple threads, causing race conditions. I want the connections to run sequentially.
I can't change the application; is there a way to set the database to allow only a single connection, and have the others wait until it becomes free?
Single user mode is not available on Azure SQL Database.
However, maybe you should consider adding retry logic to the application so it can retry X times, with the interval between retries increasing after each failed retry.
public void HandleTransients()
{
    var connStr = "some database";
    // Retry up to 3 times, 5 seconds apart, on transient SQL Azure errors
    var _policy = new RetryPolicy<SqlAzureTransientErrorDetectionStrategy>(
        retryCount: 3,
        retryInterval: TimeSpan.FromSeconds(5));
    using (var conn = new ReliableSqlConnection(connStr, _policy))
    {
        // Do SQL stuff here.
    }
}
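If you want the interval to actually grow after each failed attempt, the same transient-fault-handling library that provides ReliableSqlConnection also has an exponential back-off strategy (a sketch; the back-off values are examples to adjust):

// 3 retries, starting near 1 second, capped at 30 seconds,
// with roughly 2 seconds added per attempt
var backoff = new ExponentialBackoff(
    3,
    TimeSpan.FromSeconds(1),
    TimeSpan.FromSeconds(30),
    TimeSpan.FromSeconds(2));
var policy = new RetryPolicy<SqlAzureTransientErrorDetectionStrategy>(backoff);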
I have a Glue job script that does this (imports and setup not shown here), and it inserts the row into SQL Server RDS just fine:
columns = ['test']
vals = [("test",)]  # note the trailing comma: a one-element tuple, not a bare string
df = sqlContext.createDataFrame(vals, columns)
test = DynamicFrame.fromDF(df, glueContext, "test")
datasink = glueContext.write_dynamic_frame.from_catalog(frame = test,
    database = "database-name", table_name = "table-name")
job.commit()
When I run with this same connection but a larger test load (it ends up being about 100 rows), I get this error:
An error occurred while calling o596.pyWriteDynamicFrame. The TCP/IP connection to the host , port 1433 has failed. Error: "Connection timed out: no further information. Verify the connection properties. Make sure that an instance of SQL Server is running on the host and accepting TCP/IP connections at the port. Make sure that TCP connections to the port are not blocked by a firewall
The thing is, I know there's no firewall or security group issue, since one row inserts just fine. I've tried adding a loginTimeout parameter to the JDBC connection like so:
jdbc:sqlserver://<host>:<port>;databaseName=dbName;loginTimeout=600;
As the documentation here indicates you can. But the connection fails in Glue when I add it, and succeeds when I remove the loginTimeout parameter.
I've also checked the remote query timeout configuration on my SQL Server instance, and it shows 600 seconds, which is longer than any of my failed jobs, so it can't be that.
How can I get around this connection timeout error? It seems to be a limitation built into Glue.
In order to make a JDBC connection with Glue you need to follow the steps in this documentation: https://docs.aws.amazon.com/glue/latest/dg/setup-vpc-for-glue-access.html
We had done that, but it turned out that our self-referencing security group wasn't actually self-referencing. Once we changed that, the issue was resolved.
I also had to create the connection as an Amazon RDS connection and not as a JDBC connection, even though it does the same thing under the hood.
Even after doing all that I still had issues. It turns out that you need to attach the SQL connection to the job itself, outside of the script (see the sketch below for doing this programmatically). If you hit "Edit Job" you'll see a list of SQL connections there. If the connection you're trying to hit isn't in the list of required connections, you will always time out.
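The console route is the "Edit Job" screen; if you'd rather script it, here is a sketch with boto3 (the job and connection names are placeholders, and since UpdateJob overwrites the whole job definition, the current definition is fetched and re-submitted; depending on how your job is configured you may need to keep or drop other fields):

import boto3

glue = boto3.client("glue")
JOB_NAME = "my-glue-job"               # placeholder
CONNECTION_NAME = "my-rds-connection"  # the catalog connection the job needs

# UpdateJob replaces the whole definition, so start from the current one.
job = glue.get_job(JobName=JOB_NAME)["Job"]

# Drop the read-only fields that JobUpdate does not accept.
job_update = {k: v for k, v in job.items()
              if k not in ("Name", "CreatedOn", "LastModifiedOn")}
if "WorkerType" in job_update:
    # MaxCapacity conflicts with WorkerType/NumberOfWorkers in JobUpdate
    job_update.pop("MaxCapacity", None)
    job_update.pop("AllocatedCapacity", None)

# Add the connection to the job's required connections.
connections = job_update.get("Connections", {}).get("Connections", [])
if CONNECTION_NAME not in connections:
    job_update["Connections"] = {"Connections": connections + [CONNECTION_NAME]}

glue.update_job(JobName=JOB_NAME, JobUpdate=job_update)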
I'm using a Logic App whose workflow, at a certain point, calls an Azure Function via its webhook URL (as a workaround to Azure Durable Functions).
The goal of this function is to insert/update data in an Azure SQL Database with a SQL request
"MERGE INTO...USING...WHEN NOT MATCHED...WHEN MATCHED AND...".
In the logs of the Azure Function I can see it failed, and it seems to run 4 times (maybe due to the supposed timeout, I don't know). I don't understand, since I increased the CommandTimeout to 50 minutes and set the timeout of the "Launch Webhook" action in the Logic App to 1 hour. Here's a sample of the exception logged in the Azure Function:
Exception while executing function: XmlImport_DoWork
Microsoft.Azure.WebJobs.Host.FunctionInvocationException : Exception while executing function: XmlImport_DoWork ---> System.Data.SqlClient.SqlException : Execution Timeout Expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
The statement has been terminated. ---> System.ComponentModel.Win32Exception : The wait operation timed out
The table actually has around 250,000 rows, and everything works when I run the Logic App (and so the Azure Function) against a table which is almost empty!
Any ideas about what's going on and how to fix it? I looked at "Query Performance Insight" in the Azure SQL Database component, but there is nothing in the "Recommendations" section.
The Function App where my Azure Functions are hosted uses an App Service Plan.
BTW, the XML file I was trying to import into the DB is 20 MB; I tried with a lighter XML (9 MB) but it didn't work either.
Azure Durable Function: V2 and .NET Core 2.2 - Timeout expired issue RESOLVED
The activity function 'A_ValidateAndImportData' failed: "Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.". See the function execution logs for additional details.
Using Dapper to call a SQL Server stored procedure: Dapper was not honoring the "Connection Timeout" property in the connection string.
Solution: pass a commandTimeout parameter of 0 (zero, meaning no limit; or increase the timeout according to your need) to solve the problem.
Example code:
public async Task<int> ValidateAndImportData(string connectionString, int param1,
    int databaseTimeOut = 0)
{
    using (var connection = new SqlConnection(connectionString))
    {
        var param = new DynamicParameters();
        param.Add("@param1", param1);
        param.Add("@returnStatus", dbType: DbType.Int32, direction: ParameterDirection.Output);
        // commandTimeout: 0 disables the command timeout entirely
        await connection.ExecuteAsync("[dbo].[ValidateAndImportData]", param,
            commandType: CommandType.StoredProcedure, commandTimeout: databaseTimeOut).ConfigureAwait(false);
        return param.Get<int>("@returnStatus");
    }
}
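One design note on the example above: commandTimeout: 0 makes ADO.NET wait indefinitely, which can also mask a genuinely stuck query; if you can put an upper bound on how long the stored procedure should take, a large finite value is usually the safer choice.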