MyBatis + Spring Batch + Sybase (SAP ASE): trying to get the database identity value after insertion to assign it to the id field in the POJO

My code looks like the sample given below.
--table create statement
CREATE TABLE LOG
(
uniqueID NUMERIC(20,0) IDENTITY,
NAME VARCHAR(20) NOT NULL,
DESCRIPTION VARCHAR(200) NOT NULL,
USR VARCHAR(20) NOT NULL
)
--pojo class
public class Log
{
private long identifier;
private String name;
private String description;
private String user;
//getters+setters......
}
--insert statement in mapper
<insert id="insertRecord" parameterType="com.xxx.yyy.zzz.model.Log" useGeneratedKeys="true" keyProperty="identifier" keyColumn="uniqueID">
INSERT INTO LOG (NAME, DESCRIPTION, USR)
VALUES (#{log.name}, #{log.description}, #{log.user})
</insert>
Issue: when I run this code against the Sybase database, I get a NullPointerException. When I tried to debug it, the error came from within SybStatement.class. Sorry, I am not able to provide the entire stack trace due to a copy/paste restriction on my workstation.
I am able to run the same code against an H2 database successfully: records are inserted and the "identifier" field in the Log object holds the same identity value as the database row.
Has anyone faced this issue with Sybase? Please share if you have code showing the usage of the MyBatis "useGeneratedKeys" feature with Sybase.
Note:
I am running this insert statement using MybatisBatchItemWriter.
I tried using two different SqlSessionTemplate objects for the chunk reader and chunk writer, and it didn't resolve the issue.
I am using the jconn3 Sybase JDBC jar, MyBatis 3.4.4 and mybatis-spring 1.3.1.
Thanks in advance

In SQL terms, you need to do SELECT @@IDENTITY to pick up the generated value. The question is whether your framework generates such SQL...
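If the driver's getGeneratedKeys() support is what fails, one alternative worth trying is to let MyBatis run that query itself through a <selectKey> element instead of useGeneratedKeys. A rough sketch against the mapper above (untested with jConn3; parameter references kept as in the question):
<insert id="insertRecord" parameterType="com.xxx.yyy.zzz.model.Log">
  INSERT INTO LOG (NAME, DESCRIPTION, USR)
  VALUES (#{log.name}, #{log.description}, #{log.user})
  <selectKey keyProperty="identifier" resultType="long" order="AFTER">
    SELECT @@IDENTITY
  </selectKey>
</insert>
Note that MybatisBatchItemWriter runs statements in batch mode, so an order="AFTER" key query may only see the row once the batch is flushed; this is worth verifying against your setup.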

Related

Seeder method for Azure Database in EF Core 2

What is the proper method to seed data into an Azure Database? Currently in development I have a seeder method that inserts the first couple of users as well as products. The users' (including the admin user) usernames and passwords are hardcoded into the Seed method; is this an acceptable practice?
As far as the products are concerned, I have a JSON file with the product names and descriptions, which the seeder method iterates through in development to insert the data.
To answer your question "The users' (including admin user) usernames and passwords are hardcoded into the Seed method, is this an acceptable practice?":
No, you should not keep the password in cleartext; you can keep it in encrypted form and seed that instead.
In EF Core 2.1, the seeding workflow is quite different. There is now Fluent API logic to define the seed data in OnModelCreating. Then, when you create a migration, the seeding is transformed into migration commands to perform inserts, and is eventually transformed into SQL that that particular migration executes. Further migrations will know to insert more data, or even perform updates and deletes, depending on what changes you make in the OnModelCreating method.
Suppose the three classes in my model are Magazine, Article and Author. A magazine can have one or more articles and an article can have one author. There’s also a PublicationsContext that uses SQLite as its data provider and has some basic SQL logging set up.
Let’s take the example of a single entity type.
Let’s start by seeing what it looks like to provide seed data for a magazine—at its simplest.
The key to the new seeding feature is the HasData Fluent API method, which you can apply to an Entity in the OnModelCreating method.
Here’s the structure of the Magazine type:
public class Magazine
{
public int MagazineId { get; set; }
public string Name { get; set; }
public string Publisher { get; set; }
public List<Article> Articles { get; set; }
}
It has a key property, MagazineId, two strings and a list of Article types. Now let’s seed it with data for a single magazine:
protected override void OnModelCreating (ModelBuilder modelBuilder)
{
modelBuilder.Entity<Magazine> ().HasData
(new Magazine { MagazineId = 1, Name = "MSDN Magazine" });
}
A couple things to pay attention to here: First, I’m explicitly setting the key property, MagazineId. Second, I’m not supplying the Publisher string.
Next, I’ll add a migration, my first for this model. I happen to be using Visual Studio Code for this project, which is a .NET Core app, so I’m using the CLI migrations command, “dotnet ef migrations add init.” The resulting migration file contains all of the usual CreateTable and other relevant logic, followed by code to insert the new data, specifying the table name, columns and values:
migrationBuilder.InsertData(
table: "Magazines",
columns: new[] { "MagazineId", "Name", "Publisher" },
values: new object[] { 1, "MSDN Magazine", null });
Inserting the primary key value stands out to me here—especially after I’ve checked how the MagazineId column was defined further up in the migration file. It’s a column that should auto-increment, so you may not expect that value to be explicitly inserted:
MagazineId = table.Column<int>(nullable: false)
.Annotation("Sqlite:Autoincrement", true)
Let’s continue to see how this works out. Using the migrations script command, “dotnet ef migrations script,” to show what will be sent to the database, I can see that the primary key value will still be inserted into the key column:
INSERT INTO "Magazines" ("MagazineId", "Name", "Publisher")
VALUES (1, 'MSDN Magazine', NULL);
That’s because I’m targeting SQLite. SQLite will insert a key value if it’s provided, overriding the auto-increment. But what about with a SQL Server database, which definitely won’t do that on the fly?
I switched the context to use the SQL Server provider to investigate and saw that the SQL generated by the SQL Server provider includes logic to temporarily set IDENTITY_INSERT ON. That way, the supplied value will be inserted into the primary key column. Mystery solved!
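For reference, the script produced for SQL Server looks roughly like this (a sketch of the shape of the output, not the exact text EF Core emits):
SET IDENTITY_INSERT [Magazines] ON;
INSERT INTO [Magazines] ([MagazineId], [Name], [Publisher])
VALUES (1, N'MSDN Magazine', NULL);
SET IDENTITY_INSERT [Magazines] OFF;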
You can use HasData to insert multiple rows at a time, though keep in mind that HasData is specific to a single entity. You can’t combine inserts to multiple tables with HasData. Here, I’m inserting two magazines at once:
modelBuilder.Entity<Magazine>()
.HasData(new Magazine{MagazineId=2, Name="New Yorker"},
new Magazine{MagazineId=3, Name="Scientific American"}
);
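Seeding the related types works the same way: you call HasData per entity and set the foreign-key values yourself, since HasData can't span tables. A sketch, assuming Article exposes ArticleId, Title and MagazineId properties:
modelBuilder.Entity<Article>().HasData(
    new Article { ArticleId = 1, MagazineId = 1, Title = "EF Core 2.1 Data Seeding" });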
For a complete example, you can browse through this sample repo.
Hope it helps.

WSO2 DSS insert null Cassandra

I use WSO2 DSS to insert data into a Cassandra table, for example this table:
CREATE TABLE logs.test (id int,code int, PRIMARY KEY (id));
Inside WSO2 DSS, I defined the code column with a default value like this: #{NULL}
When I try the DSS service like this, without giving the code parameter:
<p:test xmlns:p="http://ws.wso2.org/dataservice">
<xs:id xmlns:xs="http://ws.wso2.org/dataservice">1</xs:id>
</p:test>
I get this error:
<axis2ns56:source_data_service>
<axis2ns56:data_service_name>Cassandra</axis2ns56:data_service_name>
<axis2ns56:description>N/A</axis2ns56:description>
<axis2ns56:location>\Cassandra.dbs</axis2ns56:location>
<axis2ns56:default_namespace>http://ws.wso2.org/dataservice</axis2ns56:default_namespace>
</axis2ns56:source_data_service>
<axis2ns56:ds_code>UNKNOWN_ERROR</axis2ns56:ds_code>
<axis2ns56:nested_exception>java.lang.NumberFormatException: null</axis2ns56:nested_exception>
Nested Exception:- java.lang.NumberFormatException: For input string: "null"
Best regards,
Nicolas
Would it be possible to get the source of the dataservice?
Did you try with the following payload?
<p:test xmlns:p="http://ws.wso2.org/dataservice">
<p:id>1</p:id>
<p:code>2</p:code>
</p:test>
So I guess your issue is in this part
<param defaultValue="#{NULL}" name="code" sqlType="INTEGER"/>.
I do not know your use case, but if I remember correctly it is not a good idea to insert null values in Cassandra because it creates tombstones.
You could also have a second query that simply inserts the id, like
INSERT INTO test (id) VALUES (:id).
The exception seems to be raised by DSS, not Cassandra; it looks like DSS is not able to set a null value for an integer field.
I found a workaround: I use the Cassandra JDBC driver instead of the com.datastax driver, and it works well. The only problem is that I can only point the connection at a single node rather than the cluster.
I hope the problem will be resolved soon so that I can use the DSS Cassandra datasource connection again.
Thanks for your help

Specify columns in empty IQueryable in WCF Data Service?

I have a DataService with the following queryable:
public IQueryable<MyData> MyDataList => myDataList.AsQueryable();
I connect to this data service in Excel 2016. Everything works, but when the list is empty I get the following error message.
The query did not run or the Data Model could not be accessed.
Here's the error message we got:
An Evaluate statement cannot return a table without columns
It seems the client (Excel) needs an object to successfully determine the columns. Why? Is it possible to tell the client about the columns without the need for an object?
Can't you check the resulting query first to see whether it contains anything, and return either the actual data or a default one?
private readonly List<MyData> defaultList = new List<MyData>(); // seed with a placeholder MyData so the client can infer the columns
public IQueryable<MyData> MyDataList => myDataList.Any() ? myDataList.AsQueryable() : defaultList.AsQueryable();

Simplest way to insert data into a fresh Cassandra database using the Hector API?

I've followed numerous examples on inserting data into a Cassandra database and every time I get an exception about unconfigured column families.
Exception in thread "main" me.prettyprint.hector.api.exceptions.HInvalidRequestException: InvalidRequestException(why:unconfigured columnfamily TestColumnFamily)
at me.prettyprint.cassandra.service.ExceptionsTranslatorImpl.translate(ExceptionsTranslatorImpl.java:45)
at me.prettyprint.cassandra.connection.HConnectionManager.operateWithFailover(HConnectionManager.java:252)
at me.prettyprint.cassandra.model.ExecutingKeyspace.doExecuteOperation(ExecutingKeyspace.java:97)
at me.prettyprint.cassandra.model.MutatorImpl.execute(MutatorImpl.java:243)
at me.prettyprint.cassandra.model.MutatorImpl.insert(MutatorImpl.java:69)
at CassandraInterface.main(CassandraInterface.java:101)
Caused by: InvalidRequestException(why:unconfigured columnfamily TestColumnFamily)
at org.apache.cassandra.thrift.Cassandra$batch_mutate_result.read(Cassandra.java:19477)
at org.apache.cassandra.thrift.Cassandra$Client.recv_batch_mutate(Cassandra.java:1035)
at org.apache.cassandra.thrift.Cassandra$Client.batch_mutate(Cassandra.java:1009)
at me.prettyprint.cassandra.model.MutatorImpl$3.execute(MutatorImpl.java:246)
at me.prettyprint.cassandra.model.MutatorImpl$3.execute(MutatorImpl.java:243)
at me.prettyprint.cassandra.service.Operation.executeAndSetResult(Operation.java:103)
at me.prettyprint.cassandra.connection.HConnectionManager.operateWithFailover(HConnectionManager.java:246)
... 4 more
So I looked up how to configure them and found
BasicColumnFamilyDefinition cfdef = new BasicColumnFamilyDefinition();
cfdef.setKeyspaceName(keyspaceName);
cfdef.setName(columnFamilyName);
cfdef.setKeyValidationClass(ComparatorType.UTF8TYPE.getClassName());
cfdef.setComparatorType(ComparatorType.UTF8TYPE);
That didn't configure the column family.
All of the examples I have found are fragments without any context, so I don't know what to import or set up. In addition, some examples appear to mix the Hector API v2 and the original Hector API, so when I use them, I get "class not found" or "function not found" compiler errors.
Hector CassandraClusterTest.java
@Test
public void testAddDropColumnFamily() throws Exception {
ColumnFamilyDefinition cfDef = HFactory.createColumnFamilyDefinition("Keyspace1", "DynCf");
cassandraCluster.addColumnFamily(cfDef);
String cfid2 = cassandraCluster.dropColumnFamily("Keyspace1", "DynCf");
assertNotNull(cfid2);
// Let's wait for agreement
cassandraCluster.addColumnFamily(cfDef, true);
cfid2 = cassandraCluster.dropColumnFamily("Keyspace1", "DynCf", true);
assertNotNull(cfid2);
}
Long story short, the keyspace and column family need to exist before you try to insert data into them. You can either manage this in your code, checking whether they exist (using the example above as a reference; see the sketch below), or create them via the command-line interface (cassandra-cli).
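As a sketch of the in-code approach (cluster name, host and schema names here are illustrative; adjust them to your setup):
// Hector classes used: me.prettyprint.hector.api.* (Cluster, Keyspace),
// me.prettyprint.hector.api.ddl.*, me.prettyprint.hector.api.factory.HFactory,
// me.prettyprint.cassandra.service.ThriftKsDef, plus java.util.Arrays
Cluster cluster = HFactory.getOrCreateCluster("Test Cluster", "localhost:9160");
// Only create the schema if the keyspace does not exist yet
if (cluster.describeKeyspace("TEST") == null) {
    ColumnFamilyDefinition cfDef = HFactory.createColumnFamilyDefinition(
            "TEST", "TestColumnFamily", ComparatorType.UTF8TYPE);
    KeyspaceDefinition ksDef = HFactory.createKeyspaceDefinition(
            "TEST", ThriftKsDef.DEF_STRATEGY_CLASS, 1, Arrays.asList(cfDef));
    cluster.addKeyspace(ksDef, true); // true = block until schema agreement
}
Keyspace keyspace = HFactory.createKeyspace("TEST", cluster);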
Hector Unit Tests
Hopefully you've been able to do this by now but this is how I've done it.
I have a Cassandra install (using 1.1.4), and assuming you have all the necessary directories created:
/var/lib/cassandra
/var/lib/cassandra/data
/var/lib/cassandra/commitlogs
/var/lib/cassandra/saved_caches
I start it using:
bin/cassandra -f
I create a simple script called schema_create.txt:
CREATE KEYSPACE TEST
WITH strategy_class = 'org.apache.cassandra.locator.SimpleStrategy'
AND strategy_options:replication_factor='1';
use TEST;
CREATE COLUMNFAMILY TestColumnFamily(
userid varchar,
firstname varchar,
lastname varchar,
PRIMARY KEY (userid));
Then from the command line you can run this script using the CQL tool that comes with Cassandra, as follows:
bin/cqlsh --cql3 < schema_create.txt
This will create a keyspace named test with a column family named testcolumnfamily in Cassandra.
Now from within your Java application you can simply create a test class that has a main method (I will assume your development environment has all the necessary dependencies if you are using Maven):
try {
    // keyspace obtained from HFactory.createKeyspace(...), cluster from HFactory.getOrCreateCluster(...)
    Mutator<String> mutator = HFactory.createMutator(keyspace, StringSerializer.get());
    mutator.addInsertion("iamauser", "testcolumnfamily", HFactory.createStringColumn("firstname", "John"));
    mutator.addInsertion("iamauser", "testcolumnfamily", HFactory.createStringColumn("lastname", "Smith"));
    mutator.execute();
}
catch (HectorException he) { he.printStackTrace(); }
finally { cluster.getConnectionManager().shutdown(); }
Now go back to the command line and connect to Cassandra using:
$bin/cqlsh --cql3
use test;
select * from testcolumnfamily;
This inserts a row into your Cassandra database with the key iamauser and the name John Smith, which you can verify using the cqlsh tool as shown above.
Hope this helps.

Subsonic BatchQuery.Queue causing 'Can't decide which property to consider the key...' exception

I'm just getting started with SubSonic 3.0 ActiveRecord and am trying to implement a batch query like the one in the SubSonic docs. I'm using a batch so I can query a User and a list of the user's Orders in one shot.
When I call the BatchQuery.Queue() method, adding my "select user" query, SubSonic throws the following exception:
System.InvalidOperationException : Can't decide which property to consider the Key - you can create one called 'ID' or mark one with SubSonicPrimaryKey attribute
The code is as follows:
var db = new MyDB();
var userQuery = from u in db.Users //gets user by uid
where u.uid == 1
select u;
var provider = ProviderFactory.GetProvider();
var batch = new BatchQuery(provider);
batch.Queue(userQuery); //exception here
//create and add "select users orders" query here...
First things first - why this error? My SubSonic Users object knows its PK. "uid" is the PK in the database and the generated code reflects this. And I thought the SubSonicPrimaryKey attribute was for the SimpleRepository? Is this way of batching not for ActiveRecord?
I could ask a number of other questions, but I'll leave it at that. If anyone can help me figure out what is going on and how to issue 2 batched queries I'd be grateful!
Edit - after further investigation
I ran through the source code with the debugger. Adam is correct - the ToSchemaTable() method in Objects.cs is apparently building out my schema and failing to find a PK. At the very end, it tries to find a column property named "ID" and flags this as the PK, otherwise it throws the exception. I added a check for "UID" and this works!
Still... I'm confused. I'm admittedly a bit lost after peeling back layer after layer of the source, but it seems like this portion of code is trying to build up a schema for my table and completely ignoring my generated User class - which quite nicely identifies which column/property is the PK! It doesn't seem quite right that I'd be required to name all keys "ID" w/ ActiveRecord.
I think the answer you're looking for is that this is a really stupid bug on my part. I'm hoping to push another build next week and if you could put this on the issue list I'd really appreciate it. My apologies...
SubSonic expects your primary key to be called Id, so it's getting confused. SubSonicPrimaryKey is for SimpleRepository, but I assume the code where that exception is thrown is shared between the different templates. If you rename your PK to Id, id or ID, your query will work.
