ServiceStack.OrmLite with a DateTime.Month Predicate

While using ServiceStack.OrmLite 3.9.70.0 and following some of the examples from the ServiceStack.OrmLite wiki, I am trying to select rows where the month of the LastActivity date equals 1.
I keep getting the error:
{"variable 'pp' of type 'Author' referenced from scope '', but it is not defined"}
LastActivity is a nullable DateTime, defined like:
public DateTime? LastActivity { get; set; }
I have tried:
db.Select<Author>(q => q.LastActivity.Value.Month == 1);
AND
var visitor = db.CreateExpression<Author>();
db.Select<Author>(visitor.Where(q => q.LastActivity.Value.Month == 1));
AND
SqlExpressionVisitor<Author> ev = OrmLiteConfig.DialectProvider.ExpressionVisitor<Author>();
db.Select<Author>(ev.Where(q => q.LastActivity.Value.Month == 1));
AND
var predicate = ServiceStack.OrmLite.PredicateBuilder.True<Author>();
predicate = predicate.And(q => q.LastActivity.Value.Month == 1);
db.Select<Author>(predicate);
I am trying to avoid using a sql string in the select because I like the compile time checking of the field names and types.

Do a greater-than and less-than comparison on the date field, i.e.
LastActivity >= variableThatHoldsStartDateOfMonth && LastActivity <= variableThatHoldsLastDayOfMonth
This will give you results for the whole month.
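As a sketch, the two bounds can be computed like this (illustrated here in Java with java.time; the equivalent C# would use DateTime and DateTime.DaysInMonth, and the year/month values below are hypothetical):

```java
import java.time.LocalDate;
import java.time.temporal.TemporalAdjusters;

public class MonthBounds {
    public static void main(String[] args) {
        int year = 2020, month = 1; // hypothetical target month
        // first day of the month
        LocalDate start = LocalDate.of(year, month, 1);
        // last day of the same month (handles 28/29/30/31-day months)
        LocalDate end = start.with(TemporalAdjusters.lastDayOfMonth());
        // filter: LastActivity >= start && LastActivity <= end
        System.out.println(start + " .. " + end); // 2020-01-01 .. 2020-01-31
    }
}
```

Note this filters one specific month of one specific year, whereas `Month == 1` matches January of every year; for the latter you would need one range per year or a server-side month function.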

Oracle: date not valid for month specified in Groovy

Using a REST API, I am trying to fetch a response and save the data into a database using Groovy and Java.
I have a field, Allocation Date, which I receive from the REST API in a format like:
"2020-06-30"
So in my Java IData class I have created an IN_DATE_FORMAT constant for the input format:
public static final DateFormat IN_DATE_FORMAT = new SimpleDateFormat("yyyy-MM-dd");
And in my Groovy class I have created a constructor and use this format like below:
def contents = json.Contents as ArrayList
contents.parallelStream().each { rec ->
    IData data = new IData()
    def allocDate = rec["Allocation Date"]
    data.allocationDate = allocDate != null ? IData.IN_DATE_FORMAT.parse(allocDate as String) : null
}
When I run the code I intermittently get errors like:
ORA-01839: date not valid for month specified
I am receiving a corrupt date such as '31-Apr-20', which is simply not valid: April does not have a 31st. This value is not coming from the REST API, so I don't understand where the corrupt date is coming from. Is it caused by converting and parsing the date through a String?
Could I use something like LocalDate in my code instead? Parsing dates via Strings looks risky, and I think that is why an invalid date value, one that does not even exist and was never received from the REST API, ends up being stored in the database.
Below are my logs for def allocDate = rec["Allocation Date"]:
WARNING: Failed to execute: insert into XXX (
ALLOCATIONDATE
) values (
'31-Apr-20'
)
because: ORA-01839: date not valid for month specified
10:57:16.103 [Actor Thread 3] WARN XXXX - Failed to execture record due to:ORA-01839: date not valid for month specified
Allocation Date:- 2020-06-30
Allocation Date:- 2020-06-25
Allocation Date:- 2020-07-30
Allocation Date:- 2020-05-30
Allocation Date:- 2020-06-25
....
Below is my Insert into DB method:
private boolean insertIntoDb(Map<String, List<IData>> ricMap) {
    Sql conn = sql(ORACLE)
    conn.withTransaction {
        ricMap.entrySet().parallelStream().forEach { entry ->
            entry.value.parallelStream().forEach { val ->
                String values = """
                ${nullStr(val.allocationDate != null ? DbConnCfg.ORACLE_DATE_FORMAT.format(val.allocationDate) : null)}
                """
            }
        }
    }
}
In the DbConnCfg class I have defined the ORACLE_DATE_FORMAT constant as below:
public static final DateFormat ORACLE_DATE_FORMAT = new SimpleDateFormat("dd-MMM-yy");
You can parse a given date by using Date.parse(format, date). In your case, 31 April is leniently parsed as May 1st. I don't know if this will solve your problem.
dt1 = '30-Apr-20'
dt2 = '31-Apr-20'
def format(dt) {
    def newDt = new Date().parse("dd-MMM-yy", dt) // reverse-engineers the date, keeps it a Date object
    return newDt.format("yyyy-MM-dd") // apply the format you wish
}
assert format(dt1) == '2020-04-30'
assert format(dt2) == '2020-05-01'
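The same rollover can be reproduced on the Java side: SimpleDateFormat is lenient by default, so an impossible date silently rolls over instead of failing. A minimal sketch, with setLenient(false) shown as the way to make the parse reject such input (note also that SimpleDateFormat is not thread-safe, which is worth checking when a shared static instance is used inside parallelStream()):

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Locale;

public class LenientParseDemo {
    public static void main(String[] args) throws ParseException {
        SimpleDateFormat inFmt = new SimpleDateFormat("dd-MMM-yy", Locale.ENGLISH);
        SimpleDateFormat outFmt = new SimpleDateFormat("yyyy-MM-dd", Locale.ENGLISH);

        // Lenient by default: 31-Apr-20 silently rolls over to 01-May-2020
        System.out.println(outFmt.format(inFmt.parse("31-Apr-20"))); // 2020-05-01

        // Strict parsing rejects impossible dates instead of rolling over
        inFmt.setLenient(false);
        try {
            inFmt.parse("31-Apr-20");
        } catch (ParseException e) {
            System.out.println("rejected: 31-Apr-20");
        }
    }
}
```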

LINQ to Entities: compare string variable with NVarchar field

This code results in a timeout exception:
String City_Code = null;
var result = (from r in myContext.TableA
where ( City_Code == r.City_Code )
select r ).ToList();
while this code returns quickly:
String City_Code = null;
var result = (from r in myContext.TableA
where ( r.City_Code == City_Code )
select r ).ToList();
The difference is in the order of the operands of the equality. The field City_Code of table TableA is of type nvarchar(20).
What is a possible reason for that?
Update: The generated T-SQL for the two queries is identical except for this part: the first one has the condition "@p__linq__2 = [Extent1].[City_Code]" while the second has "[Extent1].[City_Code] = @p__linq__2".
The value of @p__linq__2 when I notice the time difference is NULL. If I run the queries in SSMS with NULL in place of @p__linq__2, they both respond very quickly.

Null value comparison in Slick

I have encountered a problem with nullable column comparison.
If some columns are Option[T], I wonder how Slick translates the === operation on these columns to SQL.
There are two possibilities: a null value (which is None in Scala) and a non-null value.
In the case of a null value, the SQL should use is instead of =.
However, Slick doesn't handle it correctly in the following case.
Here is the code (with H2 database):
object Test1 extends Controller {
case class User(id: Option[Int], first: String, last: String)
class Users(tag: Tag) extends Table[User](tag, "users") {
def id = column[Int]("id",O.Nullable)
def first = column[String]("first")
def last = column[String]("last")
def * = (id.?, first, last) <> (User.tupled, User.unapply)
}
val users = TableQuery[Users]
def find_u(u:User) = DB.db.withSession{ implicit session =>
users.filter( x=> x.id === u.id && x.first === u.first && x.last === u.last ).firstOption
}
def t1 = Action {
DB.db.withSession { implicit session =>
DB.createIfNotExists(users)
val u1 = User(None,"123","abc")
val u2 = User(Some(1232),"123","abc")
users += u1
val r1 = find_u(u1)
println(r1)
val r2 = find_u(u2)
println(r2)
}
Ok("good")
}
}
I printed out the SQL. The following is the result for the first find_u:
[debug] s.s.j.J.statement - Preparing statement: select x2."id", x2."first", x2."last" from "users" x2 where (
(x2."id" = null) and (x2."first" = '123')) and (x2."last" = 'abc')
Notice that (x2."id" = null) is incorrect here. It should be (x2."id" is null).
Update:
Is it possible to only compare non-null fields in an automatic fashion? Ignore those null columns.
E.g. in the case of User(None,"123","abc"), only do where (x2."first" = '123')) and (x2."last" = 'abc')
Slick uses three-valued logic. This shows when nullable columns are involved. In that regard it does not adhere to Scala semantics, but uses SQL semantics. So (x2."id" = null) is indeed correct under these design decisions. To do a strict NULL check use x.id.isEmpty. For a strict comparison do
(if(u.id.isEmpty) x.id.isEmpty else (x.id === u.id))
Update:
To compare only when the user id is non-null use
(u.id.isEmpty || (x.id === u.id)) && ...

Most efficient way to read from bottom of Azure Table Storage

I have an Azure table which serves as an event log. I need the most efficient way to read the bottom of the table to retrieve the most recent entries.
What is the most efficient way of doing this?
First of all, I would really advise you to base your partition key on UTC ticks. You can do it in a way that all the entities are ordered from latest to oldest.
Then if you want to get, say, the 100 latest logs you just call (assuming query is an IQueryable from your favorite client; we use Lucifure Stash): query.Take(100);
If you want to fetch entities for a certain period you write: query.Where(x => x.PartitionKey <= value); or something similar.
The "value" variable has to be constructed based on the way you construct the values for the partition key.
Assuming you want to fetch the data for the last 15 minutes, try this pseudo code:
DateTime toDateTime = DateTime.UtcNow;
DateTime fromDateTime = toDateTime.AddMinutes(-15);
string myPartitionKeyFrom = fromDateTime.ToString("yy-MM");
string myPartitionKeyTo = toDateTime.ToString("yy-MM");
string query = "";
if (myPartitionKeyFrom.Equals(myPartitionKeyTo))//In case both time periods fall in same month, then we can directly hit that partition.
{
query += "(PartitionKey eq '" + myPartitionKeyFrom + "') ";
}
else // Otherwise we need greater-than and less-than comparisons.
{
query += "(PartitionKey ge '" + myPartitionKeyFrom + "' and PartitionKey le '" + myPartitionKeyTo + "') ";
}
query += "and (RowKey ge '" + fromDateTime.ToString() + "' and RowKey le '" + toDateTime.ToString() + "')";
If you want to fetch the latest 'n' entries then you need to slightly modify your PartitionKey and RowKey values, so that the latest entries are pushed to the top of the table.
For this you need to compute both keys using DateTime.MaxValue.Subtract(DateTime.UtcNow).Ticks instead of DateTime.UtcNow.
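The reversed-key idea can be sketched as follows (shown in Java with epoch milliseconds as an analogue of .NET ticks; the 19-digit zero-padding is an assumption chosen so that lexical string order matches reverse chronological order, which is what Table storage sorts by):

```java
public class ReversedKeyDemo {
    // Larger timestamps produce smaller reversed values, so the
    // newest entry sorts first when keys are compared as strings.
    static String reversedKey(long epochMillis) {
        // zero-pad to a fixed width so lexical order == numeric order
        return String.format("%019d", Long.MAX_VALUE - epochMillis);
    }

    public static void main(String[] args) {
        long older = 1_000L, newer = 2_000L;
        // the newer event's key sorts lexically BEFORE the older one's
        System.out.println(reversedKey(newer).compareTo(reversedKey(older)) < 0); // true
    }
}
```

With keys built this way, a plain top-'n' query returns the 'n' most recent entries without scanning the whole table.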
Microsoft provides a SemanticLogging framework that has a specific sink to log to Azure Table.
If you look at the library code, it generates a partition key (in reverse order) based on a DateTime:
static string GeneratePartitionKeyReversed(DateTime dateTime)
{
dateTime = dateTime.AddMinutes(-1.0);
return GetTicksReversed(
new DateTime(dateTime.Year, dateTime.Month, dateTime.Day, dateTime.Hour, dateTime.Minute, 0));
}
static string GetTicksReversed(DateTime dateTime)
{
return (DateTime.MaxValue - dateTime.ToUniversalTime())
.Ticks.ToString("d19", (IFormatProvider)CultureInfo.InvariantCulture);
}
So you can implement the same logic in your application to build your partitionkey.
If you want to retrieve the logs for a specific date range, you can write a query that looks like that:
var minDate = GeneratePartitionKeyReversed(DateTime.UtcNow.AddHours(-2));
var maxDate = GeneratePartitionKeyReversed(DateTime.UtcNow.AddHours(-1));
// Get the cloud table
var cloudTable = GetCloudTable();
// Build the query
IQueryable<DynamicTableEntity> query = cloudTable.CreateQuery<DynamicTableEntity>();
// condition for max date
query = query.Where(a => string.Compare(a.PartitionKey, maxDate,
StringComparison.Ordinal) >= 0);
// condition for min date
query = query.Where(a => string.Compare(a.PartitionKey, minDate,
StringComparison.Ordinal) <= 0);

What is the best way to deal with nullable string columns in LinqToSql?

Assume you have a table with a nullable varchar column. When you try to filter the table, you would use (pFilter is a parameter):
var filter = pFilter;
var dataContext = new DBDataContext();
var result = dataContext.MyTable.Where(x=>x.MyColumn == filter).ToList();
Now, what if there is a keyword that means "All Nulls". The code would look like:
var filter = pFilter != "[Nulls]" ? pFilter : null;
var dataContext = new DBDataContext();
var result = dataContext.MyTable.Where(x=>x.MyColumn == filter).ToList();
But this doesn't work. Apparently, a String with a value of null is... not null?
However, what does work is this code:
var filter = pFilter != "[Nulls]" ? pFilter : null;
var dataContext = new DBDataContext();
var result = dataContext.MyTable.Where(x=>x.MyColumn == filter || (filter == null && x.MyColumn == null)).ToList();
The workaround did not convince me, which is why my question is: what is the best way to deal with nullable string columns in LinqToSql?
Use String.Equals; that will make LINQ handle null appropriately in the generated SQL query:
var result = dataContext.MyTable
.Where(x => String.Equals(x.MyColumn, filter))
.ToList();
Edit:
If you use == LINQ will generate the query for the general case, WHERE [column] = @parameter, but in SQL NULL does not match NULL; the proper way to test for NULL is [column] IS NULL.
With String.Equals LINQ has enough information to translate the method to the appropriate statement in each case, which means:
if you pass a non-null string it will be
WHERE ([column] IS NOT NULL) AND ([column] = @parameter)
and if it is null
WHERE [column] IS NULL
