I am very new to MyBatis and stuck in a situation, so I have some questions.
The complete scenario is: I need to read an Excel file and insert the Excel data into two different database tables that have a primary/foreign key relationship.
I am able to read the Excel data and insert it into the primary table, but I don't see how to insert the data into the second table. The actual problem is that I have two different POJO classes, one holding the data for each table, and two different mappers.
I am modelling the association by defining the child table's POJO inside the parent table's POJO.
Is there any way to insert data into two different tables?
Is it possible to run two insert queries in a single tag?
Any help would be appreciated.
There are a lot of ways to do that.
Here is a demonstration of one of the most straightforward ways to do it: using separate inserts. The exact solution may vary slightly, depending mainly on whether the primary keys are taken from the Excel file or are generated during insertion into the database. Here I assume that the keys are generated during insertion (as this is the slightly more complicated case).
Let's assume you have these POJOs:
class Parent {
    private Integer id;
    private Child child;
    // other fields, getters, setters etc.
}

class Child {
    private Integer id;
    private Parent parent;
    // other fields, getters, setters etc.
}
Then you define two methods in the mapper:
public interface MyMapper {

    @Insert("insert into parent (field1, ...) values (#{field1}, ...)")
    @Options(useGeneratedKeys = true, keyProperty = "id")
    void createParent(Parent parent);

    @Insert("insert into child (parent_id, field1, ...) values (#{parent.id}, #{field1}, ...)")
    @Options(useGeneratedKeys = true, keyProperty = "id")
    void createChild(Child child);
}
and use them
MyMapper myMapper = createMapper();
Parent parent = getParent();
myMapper.createParent(parent);
myMapper.createChild(parent.getChild());
Instead of a single child there can be a collection. In that case createChild is executed in a loop, once for every child.
In some databases (PostgreSQL, SQL Server) you can insert into two tables in one statement. The query, however, will be more complex.
Another possibility is to use multiple insert statements in one mapper method. I used code similar to this in PostgreSQL, with the mapping in XML:
<insert id="createParentWithChild">
insert into parent(id, field1, ...)
values (#{id}, #{field1}, ...);
insert into child(id, parent_id, field1, ...)
values (#{child.id}, #{id}, #{child.field1},...)
</insert>
and the method definition in the mapper interface:
void createParentWithChild(Parent parent);
I know this is a little old, but the solution that worked best for me was implementing two insert stanzas in my mapping XML.
<insert id="createParent">
insert into parent(id, field1, ...)
values (#{id}, #{field1}, ...);
</insert>
<insert id="createChild">
insert into child(id, parent_id, field1, ...)
values (#{child.id}, #{id}, #{child.field1},...);
</insert>
And then chaining them (if the parent call fails, do not continue on to the child call).
As a side note, in my case I am using camel-mybatis, so my Camel config had:
<from uri="stream:in"/>
<to uri="mybatis:createParent?statementType=Insert"/>
<to uri="mybatis:createChild?statementType=Insert"/>
I have the following UDT type
CREATE TYPE tag_partitions(
year bigint,
month bigint);
and the following table
CREATE TABLE ${tableName} (
tag text,
partition_info set<FROZEN<tag_partitions>>,
PRIMARY KEY ((tag))
)
The table schema is mapped using the following model
case class TagPartitionsInfo(year:Long, month:Long)
case class TagPartitions(tag:String, partition_info:Set[TagPartitionsInfo])
I have written a function which should create an Update.IfExists query, but I don't know how to update the UDT value. I tried to use set but it isn't working.
def updateValues(tableName: String, model: TagPartitions, id: TagPartitionKeys): Update.IfExists = {
  val partitionInfoType: UserType = session.getCluster().getMetadata
    .getKeyspace("codingjedi").getUserType("tag_partitions")

  // create the value
  // the logic below assumes that there is only one element in the set
  val partitionsInfoSet: Set[UDTValue] = model.partition_info.map((partitionInfo: TagPartitionsInfo) => {
    partitionInfoType.newValue()
      .setLong("year", partitionInfo.year)
      .setLong("month", partitionInfo.month)
  })
  println("partition info converted to UDTValue: " + partitionsInfoSet)

  QueryBuilder.update(tableName)
    .`with`(QueryBuilder.WHAT_TO_DO_HERE_TO_UPDATE_UDT("partition_info", partitionsInfoSet))
    .where(QueryBuilder.eq("tag", id.tag)).ifExists()
}
The mistake was that I was adding partitionsInfoSet to the table, but it is a Scala Set. I needed to convert it into a Java Set using setAsJavaSet:
  // setAsJavaSet comes from scala.collection.JavaConversions
  QueryBuilder.update(tableName)
    .`with`(QueryBuilder.set("partition_info", setAsJavaSet(partitionsInfoSet)))
    .where(QueryBuilder.eq("tag", id.tag))
    .ifExists()
}
Although it doesn't answer your exact question, wouldn't it be easier to use the Object Mapper for this? Something like this (I didn't modify it heavily to match your code):
import com.datastax.driver.mapping.annotations.{Field, PartitionKey, Table, UDT}
import scala.annotation.meta.field

@UDT(name = "scala_udt")
case class UdtCaseClass(id: Integer, @(Field @field)(name = "t") text: String) {
  def this() {
    this(0, "")
  }
}

@Table(name = "scala_test_udt")
case class TableObjectCaseClassWithUDT(@(PartitionKey @field) id: Integer,
                                       udts: java.util.Set[UdtCaseClass]) {
  def this() {
    this(0, new java.util.HashSet[UdtCaseClass]())
  }
}
and then just create the case class and use mapper.save on it. (Also note that you need to use Java collections unless you have imported the Scala codecs.)
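For reference, a minimal usage sketch might look like this (it assumes a session value that is an already-connected com.datastax.driver.core.Session and that the driver's object-mapping module is on the classpath; the case class names simply reuse the ones above):

import com.datastax.driver.mapping.MappingManager

// Sketch only: `session` is assumed to be an open com.datastax.driver.core.Session.
val manager = new MappingManager(session)
val mapper = manager.mapper(classOf[TableObjectCaseClassWithUDT])

// Build the UDT values with a Java collection, as noted above.
val udts = new java.util.HashSet[UdtCaseClass]()
udts.add(UdtCaseClass(1, "some text"))

// Persists the row, including the set of UDT values.
mapper.save(TableObjectCaseClassWithUDT(42, udts))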
The primary reasons for using the Object Mapper are ease of use and better performance, because it uses prepared statements under the hood instead of built statements, which are much less efficient.
You can find more information about the Object Mapper + Scala in an article that I wrote recently.
Is it possible to extend the Generic Inquiry screen so that it shows the number of records retrieved? Or is it perhaps possible to use PXGenericInqGrph to get the number of records of a Generic Inquiry?
However, for performance reasons it is important that I only retrieve one record with the total from the database, rather than getting all records from the database and doing a count at the application layer.
At least up until Acumatica 7.207.0029 there is no method to extend the Generic Inquiry results screen.
If you only need the record count, what you can do is edit your GI (or create a copy of it) and use the special <Count> field to get the total.
Of course, this requires you to set a GroupBy field, and its value needs to be the same for all records if you want a total record count.
If your query has a field you know to be equal across all records, you can use that field in the GroupBy tab. If not, you can achieve this by adding a join to a number table.
Number Table Workaround
This technique uses a table with numbers to create specific queries. In this case we can join it to your query to add a known common value to all rows.
Here is the XML for a Customization Project that creates this table and makes it available as the IS.Objects.Core.ISNumbers DAC.
<Customization level="200" description="Number utility table" product-version="17.207">
<Graph ClassName="ISNumbers" Source="#CDATA" IsNew="True" FileType="NewDac">
<CDATA name="Source"><![CDATA[using System;
using PX.Data;
namespace IS.Objects.Core
{
[Serializable]
public class ISNumbers: IBqlTable
{
#region Number
[PXDBInt(IsKey = true)]
[PXUIField(DisplayName = "Number", IsReadOnly = true)]
public int? Number { get; set; }
public class number : IBqlField{}
#endregion
}
}]]></CDATA>
</Graph>
<Sql TableName="ISNumbers" CustomScript="#CDATA">
<CDATA name="CustomScript"><![CDATA[IF OBJECT_ID('ISNumbers', 'U') IS NOT NULL DROP TABLE ISNumbers;
SELECT TOP 10000 IDENTITY(int,1,1) AS Number
INTO ISNumbers
FROM sys.objects s1
CROSS JOIN sys.objects s2
ALTER TABLE ISNumbers ADD CONSTRAINT PK_ISNumbers PRIMARY KEY CLUSTERED (Number)]]></CDATA>
</Sql>
</Customization>
Just add the table to the GI and create an INNER JOIN relation where the value of the Number field equals 1.
Then you can use this field in the GroupBy condition.
Next, add the Numbers field and set its value to <Count>. Keep all your other result fields so the logic is preserved, but hide them if you don't need them (they will automatically be grouped by MAX value).
All queries performed by GIs are executed in the database, so you don't need to worry about this running on the application side.
Looking for a way to do a batch update using Slick. Is there an updateAll equivalent to insertAll? Google research has failed me thus far.
I have a list of case classes with varying statuses. Each one has a different numeric value, so I cannot run the typical update query. At the same time, I want to avoid issuing thousands of individual update requests, as there could be thousands of records I want to update at the same time.
Sorry to answer my own question, but what I ended up doing was just dropping down to JDBC and doing a batch update.
private def batchUpdateQuery = "update table set value = ? where id = ?"

/**
 * Dropping to JDBC because Slick doesn't support this kind of batched update.
 */
def batchUpdate(batch: List[MyCaseClass])(implicit subject: Subject, session: Session) = {
  val pstmt = session.conn.prepareStatement(batchUpdateQuery)
  batch.foreach { myCaseClass =>
    pstmt.setString(1, myCaseClass.value)
    pstmt.setString(2, myCaseClass.id)
    pstmt.addBatch()
  }
  session.withTransaction {
    pstmt.executeBatch()
  }
}
It's not clear to me what you are trying to achieve. Insert and update are two different operations; for insert it makes sense to have a bulk function, but for update, in my opinion, it doesn't. In fact, in SQL you can just write something like this:
UPDATE
SomeTable
SET SomeColumn = SomeValue
WHERE AnotherColumn = AnotherValue
Which translates to: update SomeColumn with the value SomeValue for all the rows that have AnotherColumn equal to AnotherValue.
In Slick this is a simple filter combined with map and update:
table
  .filter(_.someColumn === someValue)
  .map(_.fieldToUpdate)
  .update(newValue)
If instead you want to update the whole row, just drop the map and pass a row object to the update function.
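For instance, a minimal sketch (reusing the table name from the snippet above; updatedRow is an assumed instance of the case class mapped by table):

// Sketch only: updates every column of the matching row at once.
table
  .filter(_.id === updatedRow.id)
  .update(updatedRow)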
Edit:
If you want to update different case classes, I'm led to think that these case classes are rows defined in your schema, and if that's the case you can pass them directly to the update function, since it is defined as:
def update(value: T)(implicit session: Backend#Session): Int
For the second problem I can't suggest a solution; looking at the JdbcInvokerComponent trait, it looks like the update function invokes the execute method immediately:
def update(value: T)(implicit session: Backend#Session): Int = session.withPreparedStatement(updateStatement) { st =>
  st.clearParameters
  val pp = new PositionedParameters(st)
  converter.set(value, pp, true)
  sres.setter(pp, param)
  st.executeUpdate
}
Probably because you can actually run only one update query at a time per table, and not multiple updates on multiple tables, as also stated in this SO question; but you can of course update multiple rows on the same table.
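If every row really needs a different value, one simple workaround is just to loop. This is a sketch only, under the assumption that one UPDATE statement per row is acceptable (it is not a true JDBC batch), and it reuses the table/id/value names from the JDBC snippet above:

// Sketch: issues one UPDATE per element instead of a JDBC batch.
def updateAll(batch: List[MyCaseClass])(implicit session: Session): Unit =
  batch.foreach { row =>
    table
      .filter(_.id === row.id)
      .map(_.value)
      .update(row.value)
  }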
Is there a way I can update multiple rows in a Cassandra database using a column family template, e.g. by supplying a list of keys?
Currently I am using an updater ColumnFamilyTemplate to loop through a list of keys and do an update for each row. I have seen queries like MultigetSliceQuery, but I don't know their equivalent for doing updates.
There is no utility method in ColumnFamilyTemplate that allows you to just pass a list of keys with a list of mutations in one call.
You can implement your own using mutators.
This is the basic code for how to do it in Hector:
Set<String> keys = MY_KEYS;
Map<String, String> pairsOfNameValues = MY_MUTATION_BY_NAME_AND_VALUE;

Set<HColumn<String, String>> columns = new HashSet<HColumn<String, String>>();
for (Entry<String, String> pair : pairsOfNameValues.entrySet()) {
    columns.add(HFactory.createStringColumn(pair.getKey(), pair.getValue()));
}

Mutator<String> mutator = template.createMutator();
String columnFamilyName = template.getColumnFamily();
for (String key : keys) {
    for (HColumn<String, String> column : columns) {
        mutator.addInsertion(key, columnFamilyName, column);
    }
}
mutator.execute();
Well, it should look like that. This is an example for insertion; be sure to use the following methods for batch mutations:
mutator.addInsertion
mutator.addDeletion
mutator.addCounter
mutator.addCounterDeletion
since these ones will execute right away without waiting for mutator.execute():
mutator.incrementCounter
mutator.deleteCounter
mutator.insert
mutator.delete
As a last note: a mutator allows you to batch mutations on multiple rows and multiple column families at once, which is why I generally prefer to use them instead of CF templates. I have a lot of denormalization for functionality that uses the "push-on-write" pattern of NoSQL.
You can use a batch mutation to insert as much as you want (within thrift_max_message_length_in_mb). See http://hector-client.github.com/hector//source/content/API/core/1.0-1/me/prettyprint/cassandra/model/MutatorImpl.html.
Forgive my ignorance with Linq to SQL but...
How do you query multiple tables in one fell swoop?
Example:
I want to query, say, 4 tables for a title that includes the word "penguin". Funnily enough, each table also has a field called TITLE.
The four tables are Book, Journal, Magazine, and Report.
I want to query each table (column: TITLE) for the word "penguin". Each table references (via a foreign key) a parent table that is simply called Reference, linked on a column called REF_ID. So ideally the result should come back as a list of REF_IDs where the query criteria were matched.
If you can help you will be richly rewarded....... (with a green tick ;)
The code I have works for just one table - but not for two:
var refs = db.REFERENCEs
.Include(r => r.BOOK).Where(r => r.BOOK.TITLE.Contains(titleString)).Include(r => r.JOURNAL.AUTHORs)
.Include(r => r.JOURNAL).Where(r => r.JOURNAL.TITLE.Contains(titleString));
I had a similar scenario a while back and ended up creating a view that unioned my tables and then mapped that view to a LINQ-to-SQL entity.
Something like this:
create view dbo.References as
select ref_id, title, 'Book' as source from dbo.Book
union all
select ref_id, title, 'Journal' from dbo.Journal
union all
select ref_id, title, 'Magazine' from dbo.Magazine
union all
select ref_id, title, 'Report' from dbo.Report
The mapping would look like this (using attributes):
[Table(Name="References")]
public class Reference {
[Column(Name="Ref_Id", IsPrimaryKey=true)]
public int Id {get;set;}
[Column]
public string Title {get;set;}
[Column]
public string Source {get;set;}
}
Then a query might look like this:
var query = db.GetTable<Reference>().Where(r => r.Title.Contains(titleString));