PXSelect Not Always Querying Database - acumatica

I am running a PXSelect statement that does not always query the database when the code is hit. On the first run it queries the DB, but after that it just reuses the results from the previous query (which is no good). Is there any way to force PXSelect to always query the database with the SQL generated from the BQL? If so, how would I do that?
Thanks.

I believe PXSelectReadonly is what you are looking for. Use it in place of PXSelect.
Sometimes we find it necessary to clear the query cache, which should also force your PXSelect to hit the database when needed. Example:
MyView.Cache.ClearQueryCache();
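For reference, a minimal sketch combining both suggestions; the Orders view, the SOOrder DAC, and the orderNbr variable below are illustrative placeholders, not taken from the original post:

// Option 1: PXSelectReadonly issues a fresh request against the database.
// orderNbr is a placeholder key value assumed to be in scope.
SOOrder order = PXSelectReadonly<SOOrder,
    Where<SOOrder.orderNbr, Equal<Required<SOOrder.orderNbr>>>>
    .Select(this, orderNbr);

// Option 2: clear the view's query cache so the next Select() hits the database again.
Orders.Cache.ClearQueryCache();
var rows = Orders.Select();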

Related

About speedy mass deletion of users in Kentico10

I want to delete more than 1 million user records in Kentico 10.
I tried deleting them with UserInfoProvider.DeleteUser() (see the documentation below), but a rough calculation suggests it would take nearly a year.
https://docs.kentico.com/api10/configuration/users#Users-Deletingauser
Since that is only a rough estimate it will probably be somewhat shorter in practice, but it would still take far too long.
Is there any other way to delete users in a short time?
Of course make sure you have a backup of your database before you do any of this.
Depending on the features you're using, you could get away with a SQL statement. Due to the complexities of the references of a user to multiple other tables, the SQL statement can get pretty complex and you need to make sure you remove the other references before removing the actual user record.
I'd highly recommend an API approach: delete users through the API so that it removes all the references for you automatically. In your API calls, make sure you wrap the delete action in the following so that event logging and other unneeded, labor-intensive activities are suppressed.
using (var context = new CMSActionContext())
{
    context.DisableAll();
    // delete your user
}
In your code, I'd only select the top 100 or so at a time and delete them in batches. Assuming you don't need this done all in one run, you could let the scheduled task run your custom code for a week and see where you're at.
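A rough sketch of what one batch of that loop might look like; the filter, the batch size, and the namespaces are assumptions you would adapt to your own selection criteria:

using System.Linq;
using CMS.Base;
using CMS.Membership;

// Grab a limited batch so a scheduled task can chip away at the backlog over several runs.
// The WhereEquals filter is a placeholder - substitute your own selection criteria.
var batch = UserInfoProvider.GetUsers()
                            .WhereEquals("UserEnabled", false)
                            .TopN(100)
                            .ToList();

using (var context = new CMSActionContext())
{
    // Disable event logging and other expensive side activities while deleting.
    context.DisableAll();

    foreach (var user in batch)
    {
        UserInfoProvider.DeleteUser(user);
    }
}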
If all else fails, figure out how to delete the user and the 70+ foreign key references and you'll be golden.
Why don't you delete them with a SQL query? I believe it would be much faster.
Bulk delete functionality exists starting from version 10. UserInfoProvider has a BulkDelete method; in fact, any info provider inherited from AbstractInfoProvider has one.
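A hedged example of what that call might look like; the exact overload, the ProviderObject access, and the WHERE condition are assumptions, so check the AbstractInfoProvider documentation for your version:

using CMS.DataEngine;
using CMS.Membership;

// BulkDelete is inherited from AbstractInfoProvider, so it is reached through the provider instance.
// The condition is a placeholder - substitute the criteria for the users you actually want removed.
UserInfoProvider.ProviderObject.BulkDelete(
    new WhereCondition().WhereEquals("UserEnabled", false));

Note that a bulk delete typically skips the per-object events and reference cleanup that DeleteUser() performs, so verify how dependent data is handled before relying on it.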

Intercept and modify incoming SQL queries to Spark Thrift Server

I have a thrift server up and running, with users sending queries over a JDBC connection. Can I intercept and modify the queries as they come in, and then send the result of the modified query back to the user?
For example - I want the user to be able to send the query
SELECT * FROM table_x WHERE pid="123";
And have the query modified to
SELECT * FROM table_y WHERE pid="123";
and the results of the second query should be returned. This should be transparent to the user.
SparkExecuteStatementOperation and SparkSession are where we thought we would add our code. I am using (not yet in production) a simple rule based on an external policy: I change the table name to a view name in the SQL before passing it on. It's a bit hacky, though.
There is no built-in way to change the query inside Spark Thrift Server. You can instead rewrite the query before it reaches your JDBC/ODBC driver. For a simple query, plain string modification is enough; changing only a table name is easy. Parsing and rewriting a complex query, however, takes considerably more work and is not easy.
You could use a database proxy to rewrite the queries as needed before they hit the database(s).
I'm not sure if it makes sense in your particular situation, but if it does, take a look at Gallium Data, that's a common use case.

Weird issue with Azure SQL Database v12: the database is always slow on the first insert or delete execution, but not with V11

We are using MVC4, ASP.NET 4.5, Entity Framework 6.
When we used Azure SQL Database v11, initial record inserts and deletes via EF worked fine and quickly. However, now on v12, I notice that initial inserts and deletes can be very slow, especially if we choose a new value when inserting. If we insert a new record with the same value, the response is rapid. The delay I am talking about can be about 30 secs on S1, 15 secs on S2, and 7 secs on S3.
As I say, we never encountered this on v11.
Any ideas gratefully received.
EDIT1
I've just been doing some diagnostics, and it seems that a view I was using now runs very slowly the first time:
db.ExecuteStoreCommand("DELETE FROM Vw_Widget where Id={0}", ID);
Do I need to rejig views in anyway for Azure SQL Database v12?
EDIT2
Looking at the code a little more, I see that I have added a delete trigger to the view; basically I set up the view so I can use this trigger code in certain situations. I am now trying to take the trigger code out and run it from the app, which does run a lot quicker. Perhaps this code should be a stored procedure.
You definitely need to run some diagnostics on your view to check the performance of your query, and you may need to tune it. The times you are quoting are far too high for any single operation. Please make sure you insert into and delete from your target tables, not views; the best practice is not to use views for inserts or deletes.
Use views only in SELECT statements.
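For example, if the view sits over a single base table (called Widget here purely as an assumption), the delete can target that table directly with the same call the question already uses:

// Delete against the underlying table rather than the view; "Widget" is an assumed base-table name.
db.ExecuteStoreCommand("DELETE FROM Widget WHERE Id = {0}", ID);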
I had a similar problem when migrating a SQL database from v2 to v12. I was working with the Business edition and tried to migrate to S0, and the performance of the DB was not good. After some time I discovered that the DTU model has particular views for monitoring which provisioning tier you need. If the problem only occurs the first time, your application is probably making a lot of queries to load data into memory, and these can affect the performance of your CRUD statements.
SELECT end_time,
       (SELECT MAX(v)
        FROM (VALUES (avg_cpu_percent),
                     (avg_data_io_percent),
                     (avg_log_write_percent)) AS value(v)
       ) AS [avg_DTU_percent]
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;
More information about that can be found on this page:
https://azure.microsoft.com/en-us/documentation/articles/sql-database-upgrade-server-portal/

Entire Document or Selected Fields only Performance in Mongoose

I have been thinking about how to make my Node.js app go faster, so I have compared querying for only some fields against querying for the entire document, because the MongoDB documentation says it is faster to query for only certain fields. The problem is that my results seem to contradict this; where am I going wrong? Here is the code I am using (it saves the results to CSV so I can build a chart in LibreOffice):
http://pastebin.com/G8KRRY3n
Option A fetches the entire document.
Option B fetches only some fields.
Here is the graph I took from it (every operation in milliseconds):
http://prntscr.com/5oofoz
I process almost 9,500 users. As you can see, for the first 0~200 items processed the timings are the same, but then the second option starts to take longer... I have tried switching the order of the options in case the garbage collector was a factor, but the results are almost the same.
So the first option is faster beyond the first few elements. The question is: in a high-traffic web app, which option is recommended, and why? I am a newbie in the performance field, so I am pretty sure I'm doing something wrong...

How can I explain() an upsert in MongoDB to see if indexes are used?

When code changes, a quick way to tell whether indexes are still appropriate for find() statements is (Node.js):
collection.find(query).explain(function (err, explanation) {
    console.log('MongoDebug: ' + explanation.cursor);
});
If the cursor is of type BtreeCursor, indexes are used.
How do I check this when using insert() with upsert: true?
explain() is a function on the cursor and is not available on inserts. There is also a $explain query modifier, but it likewise only applies to queries.
However, there is a large body of work filed as "explain 2.0"; one of the subtasks is to provide explain() for updates (SERVER-14101), and that is listed as fixed in version 2.7.7.
As a note, performing explain for every operation might be a bad idea, because it forces MongoDB to reevaluate query plans all the time, thereby increasing the server load on the database.
You can use the integrated profiler and db.currentOp() to analyze the performance of non-query operations for now, but the insights are limited. Try a simple find().explain() for manual optimization; the indexes used should be the same.
