I am using a big stored procedure that uses many linked server queries. If I run this stored procedure manually it runs fine, but if I call it from an exe using multi-threading, it raises "Cannot get the data of the row from the OLE DB provider "SQLNCLI11" for linked server "linkedserver1"." and "Row handle referred to a deleted row or a row marked for deletion." on each execution. Performance of the stored procedure is also very slow compared to the same stored procedure without linked server queries. Please give me some tips to improve the performance of the stored procedure and to fix the errors mentioned above.
Thanks
If you are querying over linked servers, you will see a decrease in performance. Could it be that the concurrent executions are affecting the same rows, therefore giving you the exceptions? If so, you might be looking at dirty reads. Is that OK for your result set?
From the looks of it, you may have to call the procedures sequentially rather than in parallel. What you can also do is cache the remote data on the local server and sync the updates etc. in batches.
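A minimal sketch of that caching idea, using the linked server name linkedserver1 from the question; the remote database, table, columns, and the local cache table dbo.RemoteCustomerCache are all hypothetical:

-- Refresh a local cache of the remote data once, outside the multi-threaded path.
IF OBJECT_ID('dbo.RemoteCustomerCache') IS NULL
    CREATE TABLE dbo.RemoteCustomerCache (
        CustomerId   INT NOT NULL PRIMARY KEY,
        CustomerName NVARCHAR(200) NOT NULL
    );

BEGIN TRANSACTION;
    TRUNCATE TABLE dbo.RemoteCustomerCache;
    INSERT INTO dbo.RemoteCustomerCache (CustomerId, CustomerName)
    SELECT CustomerId, CustomerName
    FROM linkedserver1.RemoteDb.dbo.Customer;  -- single linked-server round trip
COMMIT;

The stored procedure then reads dbo.RemoteCustomerCache instead of hitting the linked server from every thread.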
We use node-mssql and we're trying to send an array of data to a stored procedure.
Unlike TVPs, which seem a little bit complex, we found this bulk method, which is very interesting, but all the examples that we found create a new table instead of pushing data to a stored procedure.
Is there a way to use it to get the bulk results in a stored procedure?
Our SQL Server version is 2012. Really appreciate any help in advance.
If table-valued parameters seem too complex to pass from node.js, you can alternatively have a staging table, bulk load the data into it, and process the staging table inside the stored procedure.
Follow the steps below; a T-SQL sketch of the staging table and procedure follows the list:
BULK LOAD INTO Staging Table
Process the staging table inside the stored procedure
Clean up the Staging table inside the stored procedure, once you are done with the processing.
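A minimal sketch of steps 2 and 3, assuming step 1 is done client-side with node-mssql's bulk API; the table names dbo.OrdersStaging and dbo.Orders and the procedure name are hypothetical:

-- Staging table that the client bulk-loads into (step 1).
CREATE TABLE dbo.OrdersStaging (
    OrderId INT NOT NULL,
    Amount  DECIMAL(18, 2) NOT NULL
);
GO
CREATE PROCEDURE dbo.usp_ProcessOrdersStaging
AS
BEGIN
    SET NOCOUNT ON;
    -- Step 2: process the bulk-loaded rows.
    INSERT INTO dbo.Orders (OrderId, Amount)
    SELECT OrderId, Amount
    FROM dbo.OrdersStaging;
    -- Step 3: clean up the staging table once processing is done.
    TRUNCATE TABLE dbo.OrdersStaging;
END
GO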
I have recently deployed a PostgreSQL database on a Linux server.
One of the stored procedures is taking around 20 to 24 seconds. I have executed the same stored procedure against a blank database as well (no rows returned) and it takes the same time. I found that the slowness occurs because of an aggregate function.
If I remove the ARRAY_TO_JSON(ARRAY_AGG( wrapper, the result is fetched within a second.
Below is my code snippet:
SELECT ARRAY_TO_JSON(ARRAY_AGG(ROW_TO_JSON(A))) FROM (
select billservice.billheaderid,billservice.billserviceid AS billserviceid,.....(around 120 columns in select ).....
)A;
Explain Execution Plan:
Previously I had deployed the PostgreSQL database to a Windows server, and the same stored procedure took only around 1 to 1.5 seconds.
In both cases I tested with the same database and the same amount of data, and both servers have the same configuration (RAM, processor) and the same PostgreSQL configuration.
While executing the stored procedure on the Linux server, CPU usage goes to 100%.
Let me know if you have any solution for this.
I am currently working within a Sybase ASE 15.7 server and need a dependable way to get all of the stored procedures which are dependent upon a particular column. The Sybase system procedure sp_depends is notoriously unreliable for this. I was wondering if anyone out there had a more accurate way to discover these dependencies.
Apparently, the IDs of the columns are supposed to be stored in a bitmap in the varbinary column sysdepends.columns. However, I have not yet found a bitmask which has been effective in decoding these column IDs.
Thanks!
A tedious solution could be to parse the stored procedure code in the system table syscomments to retrieve the tables and columns.
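For example, a crude text search over syscomments; a sketch, where my_column is a placeholder, and note that ASE stores the source in 255-character chunks, so a name split across two chunks would be missed:

-- Find procedures whose source text mentions the column name.
SELECT DISTINCT o.name
FROM sysobjects o
JOIN syscomments c ON c.id = o.id
WHERE o.type = 'P'
  AND c.text LIKE '%my_column%'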
A partial solution might be to run sp_recompile on all relevant tables, then watch master..monCachedProcedures to see changes in the CompileDate. Note that the CompileDate will only change once the stored proc has been executed after the sp_recompile (it actually gets compiled on first execution).
This would at least give you an idea of stored procedures that are in use, that are dependent on the specified table.
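Something along these lines; a sketch that assumes the MDA tables are enabled, with my_table as a placeholder:

-- Mark the table's dependent plans for recompilation.
EXEC sp_recompile 'my_table'
-- Later, after the procedures have actually been run, look for fresh compile dates.
SELECT ObjectName, CompileDate
FROM master..monCachedProcedures
ORDER BY CompileDate DESC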
Not exactly elegant...
I'm using an Azure function like a scheduled job, using the cron timer. At a specific time each morning it calls a stored procedure.
The function is now taking 4 minutes to run a stored procedure that takes a few seconds to run in SSMS, and this time is increasing despite successful efforts to improve the speed of the stored procedure itself.
The function is not doing anything intensive.
using (SqlConnection conn = new SqlConnection(str))
{
    conn.Open();
    // 10-minute timeout so a slow run doesn't abort mid-execution.
    using (var cmd = new SqlCommand("Stored Proc Here", conn) { CommandType = CommandType.StoredProcedure, CommandTimeout = 600 })
    {
        cmd.Parameters.Add("@Param1", SqlDbType.DateTime2).Value = DateTime.Today.AddDays(-30);
        cmd.Parameters.Add("@Param2", SqlDbType.DateTime2).Value = DateTime.Today;
        var result = cmd.ExecuteNonQuery();
    }
}
I've checked and the database is not under load with another process when the stored procedure is running.
Is there anything I can do to speed up the Azure function? Or any approaches to finding out why it's so slow?
UPDATE.
I don't believe Azure functions is at fault, the issue seems to be with SQL Server.
I eventually ran the production SP and had a look at the execution plan. I noticed that the statistics were way out; for example, a join expected the number of returned rows to be 20, but the actual figure was closer to 800k.
The solution for my issue was to update the statistics on a specific table each week.
Regarding why the stats were out so much: the client does a batch update each night that inserts several hundred thousand rows. I can only assume this affects the stats, and it's cumulative, so it seems to get worse with time.
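For reference, the weekly fix was essentially the statement below; a sketch, with dbo.BatchTable standing in for the affected table:

-- Rebuild statistics on the table the nightly batch churns.
UPDATE STATISTICS dbo.BatchTable WITH FULLSCAN;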
Please be careful adding WITH RECOMPILE hints. Often compilation is far more expensive than execution for a given simple query, meaning that you may not get decent perf for all apps with this approach.
There are different possible reasons for your experience. One common reason for this kind of scenario is that you got different query plans in the app vs. SSMS paths. This can happen for various reasons (summarized below). You can determine whether you are getting different plans by using the Query Store, which records summary data about queries, plans, and runtime stats. Please review a summary of it here:
https://learn.microsoft.com/en-us/sql/relational-databases/performance/monitoring-performance-by-using-the-query-store?view=sql-server-2017
You need a recent SSMS to get the UI, though you can run direct queries against the Query Store views from any TDS client.
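For instance, a direct query along these lines will show whether the same procedure produced multiple plans or context settings; a sketch, with dbo.MyProc as a placeholder:

-- Plans, context settings, and runtime stats recorded for one procedure.
SELECT q.query_id,
       q.context_settings_id,   -- differs when SET options differ
       p.plan_id,
       rs.avg_duration,
       rs.count_executions
FROM sys.query_store_query q
JOIN sys.query_store_plan p ON p.query_id = q.query_id
JOIN sys.query_store_runtime_stats rs ON rs.plan_id = p.plan_id
WHERE q.object_id = OBJECT_ID('dbo.MyProc');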
Now for a summary of some possible reasons:
One possible reason for plan differences is SET options. These are different environment settings for a query, such as ANSI_NULLS on or off. Each different setting could change the plan choice and thus perf. Unfortunately the defaults for different language drivers differ (historical artifacts from when each was built; hard to change now without breaking apps). You can review the Query Store to see if there are different "context settings" (each unique combination of SET options is a unique context settings row in the Query Store). Each different set implies different possible plans and thus potential perf changes.
The second major reason for plan changes like you describe in your post is parameter sniffing. Depending on the scope of compilation (for example, inside a sproc vs. ad hoc query text), SQL Server will sometimes look at the current parameter value during compilation to infer the frequency of that value in future executions. Instead of ignoring the value and just using a default frequency, using a specific value can generate a plan that is optimal for a single value (or set of values) but potentially slower for values outside that set. You can see this in the query plan choice in the Query Store as well, btw.
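If parameter sniffing turns out to be the issue, a common mitigation looks like the sketch below; the procedure and table names are hypothetical, and which hint is appropriate depends on the workload:

CREATE OR ALTER PROCEDURE dbo.MyProc
    @From DATETIME2,
    @To   DATETIME2
AS
BEGIN
    SELECT COUNT(*)
    FROM dbo.Orders
    WHERE OrderDate >= @From AND OrderDate < @To
    -- Compile for an average value distribution instead of the sniffed parameters;
    -- OPTION (RECOMPILE) is the per-execution alternative.
    OPTION (OPTIMIZE FOR UNKNOWN);
END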
There are other possible reasons for performance differences beyond what I mentioned. Sometimes there are perf differences when running in MARS mode vs. not in the client. There may also be differences in how you call the client drivers that impact perf beyond this.
I hope this gives you a few tools to debug possible reasons for the difference. Good luck!
For a project I worked on we ran into the same thing. It's not a function issue but a SQL Server issue. For us, we were updating sprocs during development, and it turns out that, per execution plan, SQL Server will cache certain routes/indexes (layman explanation) and that gets out of sync for the new sproc.
We resolved it by specifying WITH RECOMPILE on the sproc, and the API call and SSMS then had the same timings.
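In T-SQL that looks like the sketch below; the names are placeholders, and the same effect at statement level is OPTION (RECOMPILE) on the individual query:

-- Procedure-level: never cache a plan for this proc; recompile on every call.
CREATE OR ALTER PROCEDURE dbo.MyProc
WITH RECOMPILE
AS
BEGIN
    SELECT COUNT(*) FROM dbo.Orders;
END
GO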
Once the system is settled, that statement can and should be removed.
Search on "slow sproc, fast SSMS" etc. to find others who have run into this situation.
I have a very small database (50 MB) and I'm on a Basic plan. There will be only a single user, but we need to create many databases (always one per user) since they will be used for training purposes. Each database is created with the following statement:
CREATE DATABASE Training1 AS COPY OF ModelDatabase1
We seem to be getting very, very slow performance when we first query this database; afterwards it seems acceptable.
To give you an idea: we have an SP, StartupEvents, that runs when the application is started. This query takes 25 seconds to run the first time, which seems incredible since the database is very small and the tables the query touches don't contain many records. If we run the procedure again afterwards it executes immediately...
How can we avoid this?