DBUnit with HSQLDB: String column too short

I have an entity with the following attribute
@Lob
@NotNull
private String myContent;
Now, in my production setup I use a CLOB for the column, since the content can be several thousand characters. However, the unit tests run against an in-memory HSQLDB, and during the tests I get this error:
Caused by: org.hsqldb.HsqlException: data exception: string data, right truncation
at org.hsqldb.error.Error.error(Unknown Source)
As far as my research revealed, the reason seems to be that DBUnit automatically creates a 255-character column for the string, which in my case is not long enough for the content I insert. So, what could I do about this?

Try something like this:
@Column(columnDefinition = "VARCHAR", length = 65535)
@Lob
@NotNull
private String myContent;
That should cause a larger column to be created.
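For context, here is a sketch of how the whole field might look in a minimal entity (the class name MyEntity and the id field are illustrative, not from the question). The explicit length only affects schema generation for the in-memory HSQLDB, so the production CLOB mapping should be unaffected:
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.Lob;
import javax.validation.constraints.NotNull;

@Entity
public class MyEntity {

    @Id
    @GeneratedValue
    private Long id;

    // Explicit length so the generated HSQLDB column is wide enough for the test data;
    // 65535 is an arbitrary upper bound, adjust it to your longest content.
    @Lob
    @NotNull
    @Column(columnDefinition = "VARCHAR", length = 65535)
    private String myContent;

    public String getMyContent() { return myContent; }
    public void setMyContent(String myContent) { this.myContent = myContent; }
}
Alternatively, if you want to keep a real LOB in the test schema, HSQLDB should also accept columnDefinition = "CLOB".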

Related

Filtering operations on a Dataset are failing with familiar error "AnalysisException: cannot resolve given input columns"

I am working with Kafka 2.3.0 and Spark 2.3.4. I am trying to run a filter on a Dataset, but I get the AnalysisException below and am not able to figure out the resolution. Note that the POJO Dataset by itself is fine and prints well on the console.
The code is in continuation of this. Do note that the POJO dataset is made from streaming data coming in via Kafka.
I have looked at the column names for mismatches, and tried variations of the filter statement using lambdas as well as SQL. I think I'm missing something in my understanding to get this to work.
Here is the POJO class:
public class Pojoclass2 implements Serializable {
private java.sql.Date dt;
private String ct;
private String r;
private String b;
private String s;
private Integer iid;
private String iname;
private Integer icatid;
private String cat;
private Integer rvee;
private Integer icee;
private Integer opcode;
private String optype;
private String opname;
public Pojoclass2 (){}
...
//getters and setters
}
//What works (dataAsSchema2 is a Dataset<Row> formed out of incoming streaming data of a kafka topic):
Encoder<Pojoclass2> encoder = Encoders.bean(Pojoclass2.class);
Dataset<Pojoclass2> se= new Dataset<Pojoclass2>(sparkSession,
dataAsSchema2.logicalPlan(), encoder);
//I can print se on a console sink and it is all good. I can do all filtering on se but can only receive the return value as Dataset<Row>.
//What doesn't work (it compiles but throws the AnalysisException at runtime):
Dataset<Pojoclass2> h = se
.filter((FilterFunction<Pojoclass2>) s -> s.getBuyerName() == "ASD");
//or
Dataset<Pojoclass2> h = se
.filter((FilterFunction<Pojoclass2>) s -> s.getBuyerName() == "ASD").as(Encoders.bean(Pojoclass2.class));
And the error trace (note that this is the actual trace; in Pojoclass2 I've changed the attribute names to protect confidentiality, so you may see differences in the names, but the types match):
"
Exception in thread "main" org.apache.spark.sql.AnalysisException: cannot resolve '`contactRole`' given input columns: [iname, ct, icatid, s, r, b, opname, cat, opcode, dt, iid, icee, optype, rvee];;
'TypedFilter ibs.someengine.spark.somecore.SomeMain$$Lambda$17/902556500#23f8036d,
...
...
"
I expect the filter to run properly and h to contain the filtered, strongly typed rows.
Currently, I am working around it by converting to a DataFrame (Dataset<Row>), but that sort of defeats the purpose (I guess).
I also noticed that only a few operations on strongly typed Datasets seem to support operating via the bean class. Is that a valid understanding?
Thanks!
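Not a full answer, but for reference, a typed filter over a bean-encoded Dataset is usually written as sketched below (continuing the question's own variables). Note that String comparison in the lambda should use equals() rather than ==, and that as(encoder) avoids constructing the Dataset from the logical plan by hand. Whether this resolves the column-resolution error is a separate question; the exception suggests the bean property names and the incoming schema may not line up, which is worth double-checking.
import org.apache.spark.api.java.function.FilterFunction;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Encoder;
import org.apache.spark.sql.Encoders;

// Sketch: bind the streaming Dataset<Row> to the bean encoder, then filter in a typed way.
Encoder<Pojoclass2> encoder = Encoders.bean(Pojoclass2.class);
Dataset<Pojoclass2> se = dataAsSchema2.as(encoder);

Dataset<Pojoclass2> h = se.filter(
        (FilterFunction<Pojoclass2>) p -> "ASD".equals(p.getBuyerName()));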

How to avoid exponential format for a String which stores double values from the database

I have a class like the one below which is used to generate XML via a marshaller.
I have a database FLOAT column with a precision of 126 which stores big numeric values.
For example: Salary FLOAT(126)
In the class I have mappings like below:
@NamedQuery(name = "example", query = "SELECT test FROM example test WHERE (test.xmlGenerate IS NULL OR NOT test.xmlGenerate = 'Y')")
public class Example {
@Column(name = "SALARY")
private String monthlySalary;
}
I generated getters and setters for the salary in the class.
Then I generate the XML using a JAXB marshaller with the code below:
JAXBContext context = JAXBContext.newInstance(Example.class);
Marshaller m = context.createMarshaller();
m.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, Boolean.TRUE);
m.marshal(object, writer);
Now my problem is that if the salary has more than 8 digits, it is generated in exponential form.
Example: 110016400 becomes 1.100164E8
Please suggest how I can avoid the exponential form. I am already using a String instead of a double or long column so that the value would not be converted to exponential form, yet it still gets converted.
Use this:
Retrieve the value from the DB as a double, say:
public double getSalary() {
return salary;
}
Then convert it to a String:
BigDecimal.valueOf(getSalary()).toPlainString();
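As a self-contained sketch of that conversion (the class and method names here are just for illustration), BigDecimal.valueOf(...).toPlainString() keeps the full digits instead of scientific notation:
import java.math.BigDecimal;

public class SalaryFormatter {

    // Converts a double read from the FLOAT(126) column into a plain decimal String,
    // avoiding scientific notation such as 1.100164E8.
    public static String toPlainString(double salary) {
        return BigDecimal.valueOf(salary).toPlainString();
    }

    public static void main(String[] args) {
        System.out.println(toPlainString(1.100164E8)); // prints 110016400
    }
}
Set the resulting String on monthlySalary before marshalling, so JAXB never formats the number itself.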

How to pass string as value in mapper?

I am trying to pass a String as a value in the mapper, but I am getting an error that it is not Writable. How do I resolve this?
public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
String TempString = value.toString();
String[] SingleRecord = TempString.split("\t");
//using Integer.parseInt to calculate profit
int Amount = Integer.parseInt(SingleRecord[7]);
int Asset = Integer.parseInt(SingleRecord[8]);
int SalesPrice = Integer.parseInt(SingleRecord[9]);
int Profit = Amount*(SalesPrice-Asset);
String ValueProfit = String.valueOf(Profit);
String ValueOne = String.valueOf(one);
custID.set(SingleRecord[2]);
data.set(ValueOne + ValueProfit);
context.write(custID, data);
}
Yahoo's tutorial says:
Objects which can be marshaled to or from files and across the network must obey a particular interface, called Writable, which allows Hadoop to read and write the data in a serialized form for transmission.
From the Cloudera site:
The key and value classes must be serializable by the framework and hence must implement the Writable interface. Additionally, the key classes must implement the WritableComparable interface to facilitate sorting.
So you need an implementation of Writable to write it as a value in the context. Hadoop ships with a few stock classes such as IntWritable. The String counterpart you are looking for is the Text class. It can be used as:
context.write(custID, new Text(data));
OR
Text outValue = new Text();
outValue.set(data);
context.write(custID, outValue);
In case you need specialized functionality in the value class, you may implement Writable yourself (not a big deal after all). However, it seems like Text is enough for you.
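For completeness, here is a minimal mapper sketch with the Text fields declared up front (the class name ProfitMapper is made up; the field names and tab-separated column positions are taken from your snippet, and the ValueOne prefix is left out to keep the sketch self-contained):
import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class ProfitMapper extends Mapper<LongWritable, Text, Text, Text> {

    // Reusable Writable instances; set per record and passed to the context.
    private final Text custID = new Text();
    private final Text data = new Text();

    @Override
    public void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        String[] fields = value.toString().split("\t");
        int amount = Integer.parseInt(fields[7]);
        int asset = Integer.parseInt(fields[8]);
        int salesPrice = Integer.parseInt(fields[9]);
        int profit = amount * (salesPrice - asset);

        custID.set(fields[2]);                // customer id column
        data.set(String.valueOf(profit));     // the String is wrapped by the Text Writable
        context.write(custID, data);
    }
}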
You haven't set data as a Text in the map function according to the imports above, and TextWritable is wrong; just use Text instead.

Visual Basic string issues

Every time I try to assign any type of string to this, I get "Object reference not set to an instance of an object". I have tried every possible way to handle the string, converting it to a string again and all the fuzz. It's very frustrating, and I guess it comes down to some basic principle of the structure/class usage and the string array or whatnot (which is also very dumb).
Private Class movie
Public name As String
Public actors As String
Public year As Integer
Public country As String
Public votes As Integer
End Class
Private movies() As movie
If File.Exists(OpenFileDialog1.FileName) Then
lblPath.Text = OpenFileDialog1.FileName
Dim iFile As New StreamReader(lblPath.Text)
While Not iFile.EndOfStream
current = iFile.ReadLine
movies(i).name = "sasasasa"
lbMovies.Items.Add(movies(i).name)
i = i + 1
End While
End If
These are the code parts where I use it.
You are creating an empty array of movie objects, as was pointed out previously. Consequently, movies(i) is Nothing. When you try to access a member (movies(i).name), the appropriate exception is generated. Note that your code does not even reach the assignment operator = but fails before that. In other words, this has nothing to do with strings at all; you would get the same error if you wrote movies(i).votes = 42 instead. To fix your code, you first have to create a movie object, populate it, and append it to your array.

SubSonic - intermittent "Offset and length were out of bounds for the array" error

I inherited a website which uses SubSonic 2.0 and gets an intermittent error of "Offset and length were out of bounds for the array". If we restart the app or recycle the app pool, the issue goes away. Based on the error log below, I suspect it has something to do with SubSonic caching the table schema. Has anyone experienced this issue and can you suggest a fix?
System.ArgumentException
Offset and length were out of bounds for the array or count is greater than the number of elements from index to the end of the source collection.
System.Exception: Exception of type 'System.Web.HttpUnhandledException' was thrown. ---> System.ArgumentException: Offset and length were out of bounds for the array or count is greater than the number of elements from index to the end of the source collection.
at System.Array.BinarySearch[T](T[] array, Int32 index, Int32 length, T value, IComparer`1 comparer)
at System.Collections.Generic.SortedList`2.IndexOfKey(TKey key)
at System.Collections.Generic.SortedList`2.ContainsKey(TKey key)
at SubSonic.DataService.GetSchema(String tableName, String providerName, TableType tableType)
at SubSonic.DataService.GetTableSchema(String tableName, String providerName)
at SubSonic.Query..ctor(String tableName)
at G05.ProductController.GetProductByColorName(Int32 productId, String colorName) in C:\Projects\G05\Code\BusinessLogic\ProductController.vb:line 514
Strange that it's intermittent. How are the objects being generated? Is it using the .abp file? If so, I'd recommend running the files through SubCommander to hard-generate the classes. That way the generation of the objects is never executed on the production environment.
