How to set timeout for NHibernate LINQ statement - c#-4.0

I am using Fluent NHibernate for my ORM. As part of that I am trying to use the NHibernate LINQ syntax to fetch a set of data. The code I have works and executes correctly, except that a timeout exception is thrown if the query takes longer than roughly 30 seconds to run. My question is: how do I extend the default 30-second timeout for LINQ statements via NHibernate?
I have already seen the posts here, here, and here, but the first two refer to setting the DataContext's Timeout property, which does not apply here, and the third refers to setting the timeout in XML, which also does not apply because I am using Fluent NHibernate to generate the XML on the fly. On top of that, the post is 2 years old and Fluent NHibernate has changed since then.
With ICriteria objects and even HQL I can specify the timeout; however, that is not the goal here. I would like to know how to set that same timeout while using LINQ.
Example code:
using (var session = SessionFactory.OpenSession())
using (var transaction = session.BeginTransaction())
{
    var query = (from mem in session.Query<Member>()
                 select mem);
    query = query.Where({where statement});
    int start = (currentPage - 1) * max;
    if (start > 0)
        query = query.Skip(start).Take(max);
    else
        query = query.Take(max);
    var list = query.ToList();
    transaction.Commit();
    return list;
}
This code (the where statement does not matter) works for all purposes except when a timeout occurs.
Any help is appreciated. Thanks in advance!

I ended up setting the command timeout on the Configuration for Fluent NHibernate. The downside is that it sets the timeout for ALL of my data access calls, not just this one.
Example code:
.ExposeConfiguration(c => c.SetProperty("command_timeout", (TimeSpan.FromMinutes(10).TotalSeconds).ToString()))
I found this suggestion on this website.
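For context, here is roughly where that call sits in a typical Fluent NHibernate bootstrap. This is only a sketch: the database configuration, connection string, and mapping assembly below are placeholder assumptions, not code from the question.

// Sketch: Fluent NHibernate bootstrap with a global command timeout.
// The database choice, connection string, and mappings are placeholders.
var sessionFactory = Fluently.Configure()
    .Database(MsSqlConfiguration.MsSql2008
        .ConnectionString("Server=.;Database=MyDb;Trusted_Connection=True;"))
    .Mappings(m => m.FluentMappings.AddFromAssemblyOf<Member>())
    // "command_timeout" is in seconds and applies to every command
    // issued by sessions built from this factory.
    .ExposeConfiguration(c => c.SetProperty("command_timeout",
        TimeSpan.FromMinutes(10).TotalSeconds.ToString()))
    .BuildSessionFactory();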

NHibernate has extended IQueryable and added a few methods: https://github.com/nhibernate/nhibernate-core/blob/master/src/NHibernate/Linq/LinqExtensionMethods.cs
var query = (from c in Session.Query<Puppy>() select c).Timeout(12);
or
var query = (from c in Session.Query<Puppy>() select c);
query.Timeout(456);

I've just spent a fair amount of time fighting with this, and hopefully this will save someone else some time.
You should apply the .Timeout(120) method call at the very last moment to make sure it is used. TBH I'm not 100% sure why, but most likely it's because Timeout, like other IQueryable extension methods, returns a new query rather than mutating the existing one, so a discarded return value has no effect. Here are some examples:
WILL WORK
query = query.Where(x => x.Id == 123);
var result = query.Timeout(120).ToList();
DOESN'T WORK
query.Timeout(120);
query = query.Where(x => x.Id == 123);
var result = query.ToList();
If done like the second (DOESN'T WORK) example, it seems to fall back to the default System.Transactions.TransactionManager.DefaultTimeout.

Just in case anyone is still looking for this and finds this old thread too...
Query.Timeout is deprecated.
You should use WithOptions instead:
.WithOptions(o => o.SetTimeout(databaseTimeoutInSeconds))
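Applied to the paging query from the question, that looks something like the sketch below (assuming NHibernate 5+; the predicate and the Id property are placeholders standing in for the real where clause):

using (var session = SessionFactory.OpenSession())
using (var transaction = session.BeginTransaction())
{
    var list = session.Query<Member>()
        .WithOptions(o => o.SetTimeout(120)) // seconds, applies to this query only
        .Where(m => m.Id > 0)                // placeholder for the real where clause
        .Skip(start)
        .Take(max)
        .ToList();
    transaction.Commit();
    return list;
}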

Related

How does one access an Extension to a table in Acumatica Business Logic

Apologies if this question has been answered elsewhere, I have had trouble finding any resources on this.
The scenario is this. I have created a custom field in the Tax Preferences screen called Usrapikey.
This value holds an API key for a call that is made in some custom business logic.
The custom business logic, however, occurs on the Taxes screen.
So within the event handler I need to access that API key value from the other screen. I have tried instantiating graphs and using LINQ and BQL, but to no avail.
Below is what I have currently, and my error is: No overload for method 'GetExtension' takes 1 arguments
If I am going about this the wrong way, please let me know if there is a more civilized way to do this.
protected virtual void _(Events.FieldUpdated<TaxRev, TaxRev.startDate> e)
{
    var setup = PXGraph.CreateInstance<TXSetupMaint>();
    var TXSetupEX = setup.GetExtension<PX.Objects.TX.TXSetupExt>(setup); // <-- compile error here
    var rateObj = GetValues(e.Row.TaxID, TXSetupEX.Usrapikey);
    decimal rate;
    var tryRate = Decimal.TryParse(rateObj.rate.combined_rate, out rate);
    e.Row.TaxRate = rate * 100m;
    e.Row.TaxBucketID = 1;
}
Many Thanks!
Well, Acumatica support got back to me. It seems if you want to access the base page, use:
TXSetup txsetup = PXSetup<TXSetup>.Select(Base);
To get the extension use:
TXSetupExt rowExt = PXCache<TXSetup>.GetExtension<TXSetupExt>(txsetup);
Then you can access the extension fields like so:
var foo = rowExt.Usrfield;
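Putting those pieces together inside the original event handler gives something like the sketch below. GetValues, the rate parsing, and the field names are carried over from the question's code; the TryParse guard is my own assumption.

protected virtual void _(Events.FieldUpdated<TaxRev, TaxRev.startDate> e)
{
    // Read the base setup record from the current graph, then its extension.
    TXSetup txsetup = PXSetup<TXSetup>.Select(Base);
    TXSetupExt rowExt = PXCache<TXSetup>.GetExtension<TXSetupExt>(txsetup);

    // GetValues and the rate fields come from the question's original code.
    var rateObj = GetValues(e.Row.TaxID, rowExt.Usrapikey);
    decimal rate;
    if (Decimal.TryParse(rateObj.rate.combined_rate, out rate))
    {
        e.Row.TaxRate = rate * 100m;
        e.Row.TaxBucketID = 1;
    }
}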

How to retrieve all documents in a CouchDB database without causing out of memory

I have a CouchDB database which contains about 200,000 tweets; the keys are tweet IDs. I have a query which needs to retrieve all documents to look for some information. I'm using LightCouch to work with CouchDB in a Java web app. If I run a query like this:
List<JsonObject> tweets = dbClient.view("_all_docs").query(JsonObject.class);
and then loop through tweets, using
JsonObject tweetJson = dbClient.find(JsonObject.class, tweet.get("id").toString().replaceAll("\"", ""));
to retrieve each tweet one by one, it takes an extremely long time for 200,000 documents. If I load all documents in one single query using includeDocs(true)
List<JsonObject> allTweets = dbClient.view("_all_docs").includeDocs(true).query(JsonObject.class);
it causes an out-of-memory exception since the number of documents is too large. So how can I deal with this problem? I'm thinking about using limit(5000) to retrieve 5000 documents at a time and loop through the whole database, but I don't know how to write the loop to continue retrieving the next 5000 after the first 5000 docs. One possible solution is using startKey and endKey, but I'm confused about how to use them when the key is a tweet ID.
Use queryPage, but make sure to use a String as the key.
See: https://github.com/lightcouch/LightCouch/issues/26#event-122327174
0.1.6 still seems to show this behaviour.
A workaround that I found for this goes something like this:
changes = dbClient.changes()
    .since(null) // or... since(since) if you want an offset
    .includeDocs(true);
int size = 1;
getCursor("0"); // getCursor is a helper for tracking the last sequence (implementation not shown)
while (size > 0) {
    ChangesResult resultSet = changes.limit(40000).getChanges();
    List<ChangesResult.Row> rowList = resultSet.getResults();
    for (ChangesResult.Row feed : rowList) {
        // instantiate your object via gson here
    }
    getCursor(resultSet.getLastSeq());
    size = rowList.size();
}

Astyanax prepared statement withValues() not working properly

Today I migrated to Astyanax 1.56.42 and discovered that withValues() on prepared statements doesn't seem to work properly with a CQL SELECT ... WHERE ... IN () statement.
ArrayList<ByteBuffer> uids = new ArrayList<ByteBuffer>(fileUids.size());
for (int i = 0; i < fileUids.size(); i++) {
    uids.add(ByteBuffer.wrap(UUIDtoByteArray(fileUids.get(i)), 0, 16));
}
result = KEYSPACE.prepareQuery(CF_FILESYSTEM)
    .withCql("SELECT * from files where file_uid in (?);")
    .asPreparedStatement()
    .withValues(uids)
    .execute();
If my ArrayList contains more than one entry, this results in the error
SEVERE: com.netflix.astyanax.connectionpool.exceptions.BadRequestException: BadRequestException: [host=hostname(hostname):9160, latency=5(5), attempts=1]InvalidRequestException(why:there were 1 markers(?) in CQL but 2 bound variables)
What am I doing wrong? Is there any other way to handle a CQL SELECT ... WHERE ... IN () statement, or did I find a bug?
Best regards
Chris
As you mentioned, because you are supplying a collection (ArrayList) to a single ?, Astyanax throws an exception. I think you need to add a ? for each element you want inside the IN clause.
Say you have 2 ints stored in an ArrayList called arrayListObj; conceptually, your statement currently looks like this:
SELECT * FROM users WHERE somevalue IN (arrayListObj);
Because you are supplying a collection, this can't work, so you will need multiple ?'s. I.e. you want:
SELECT name, occupation FROM users WHERE userid IN (arrayListObj.get(0), arrayListObj.get(1));
I couldn't find anything on the Astyanax wiki about using the IN clause with prepared statements.

How to maintain counters with LinqToObjects?

I have the following C# code:
private XElement BuildXmlBlob(string id, Part part, out int counter)
{
    // return some unique xml particular to the parameters passed
    // remember to increment the counter also before returning.
}
Which is called by:
var counter = 0;
result.AddRange(from rec in listOfRecordings
                from par in rec.Parts
                let id = GetId("mods", rec.CKey + par.UniqueId)
                select BuildXmlBlob(id, par, counter));
Above code samples are symbolic of what I am trying to achieve.
According to Eric Lippert, the out keyword and LINQ do not mix. OK, fair enough, but can someone help me refactor the above so it does work? A colleague at work mentioned accumulators and aggregate functions, but I am a novice with LINQ and my Google searches weren't bearing any real fruit, so I thought I would ask here :).
To Clarify:
I am counting the number of parts, which could be any number each time the code is called. So every time the BuildXmlBlob() method is called, the resulting XML will have a unique element in there denoting the 'partNumber'.
So if the counter is currently on 7, that means we are processing the 7th part so far!! That means the XML returned from BuildXmlBlob() will have the counter value embedded in there somewhere. That's why I need it to be passed and incremented every time BuildXmlBlob() is called per run through.
If you want to keep this purely in LINQ and you need to maintain a running count for use within your queries, the cleanest way to do so is to use the Select() overload that includes the index in the query.
In this case, it is cleaner to do a query which collects the inputs first, then use that overload to do the projection.
var inputs =
    from recording in listOfRecordings
    from part in recording.Parts
    select new
    {
        Id = GetId("mods", recording.CKey + part.UniqueId),
        Part = part,
    };
result.AddRange(inputs.Select((x, i) => BuildXmlBlob(x.Id, x.Part, i)));
Then you wouldn't need to use the out/ref parameter.
XElement BuildXmlBlob(string id, Part part, int counter)
{
    // implementation
}
Below is what I managed to figure out on my own:
result.AddRange(listOfRecordings.SelectMany(rec => rec.Parts, (rec, par) => new { rec, par })
    .Select(@t => new
    {
        @t,
        Id = GetStructMapItemId("mods", @t.rec.CKey + @t.par.UniqueId)
    })
    .Select((@t, i) => BuildPartsDmdSec(@t.Id, @t.@t.par, i)));
I used ReSharper to convert it into a method chain, which constructed the basics of what I needed, and then I simply tacked the Select statement on at the end.

CouchDB - Filtered Replication - Can the speed be improved?

I have a single database (300 MB, 42,924 documents) consisting of about 20 different kinds of documents from about 200 users. The documents range in size from a few bytes to many kilobytes (150 KB or so).
When the server is unloaded, the following replication filter function takes about 2.5 minutes to complete.
When the server is loaded, it takes more than 10 minutes.
Can anyone comment on whether these times are expected, and if not, suggest how I might optimize things in order to get better performance?
function(doc, req) {
    var acceptedDate = true;
    if (doc.date) {
        var docDate = new Date();
        var dateKey = doc.date;
        docDate.setFullYear(dateKey[0], dateKey[1], dateKey[2]);
        var reqYear = req.query.year;
        var reqMonth = req.query.month;
        var reqDay = req.query.day;
        var reqDate = new Date();
        reqDate.setFullYear(reqYear, reqMonth, reqDay);
        acceptedDate = docDate.getTime() >= reqDate.getTime();
    }
    return doc.user_id && doc.user_id == req.query.userid && doc._id.indexOf("_design") != 0 && acceptedDate;
}
Filtered replication works slowly because, for each fetched document, complex logic runs to decide whether to replicate it or not:
1. CouchDB fetches the next document.
2. Because the filter function has to be applied, the document gets converted to JSON.
3. The JSONified document is passed through stdio to the query server.
4. The query server handles the document and decodes it from JSON.
5. The query server then looks up and runs your filter function, which returns a true or false value to CouchDB.
6. If the result is true, the document gets replicated.
7. Go to step 1 and loop over all documents.
For non-filtered replication, take this list, throw away steps 2-5, and let step 6 always be true. This overhead slows down the whole replication process.
To significantly improve filtered replication speed, you may use Erlang filters via the Erlang native query server. They run inside CouchDB and don't pass through any stdio interface, so there is no JSON decode/encode overhead.
NOTE that the Erlang query server does not run inside a sandbox like the JavaScript one does, so you need to really trust the code you run with it.
Another option is to optimize your filter function, e.g. reduce object creation and method calls, but you won't actually win much by doing this.
