Groovy addBatch/executeBatch with auto-generated keys

Has anyone retrieved the auto-generated keys for a database insert while using Groovy SQL's withBatch method? I have the following code:
Sql target = ... // database connection
target.withBatch { ps ->
    insertableStuff.each { ps.addBatch(it) }
    ps.executeBatch()
    def results = ps.getGeneratedKeys() // what do I do with this?
}
We're using DB2, and I've successfully tested the getGeneratedKeys method with a single statement/result set, but once I wrap the process in a batch, I'm not sure what objects I'm dealing with anymore.
According to IBM, it is possible to get the results back, but their example uses standard JDBC objects, not the Groovy ones. Any ideas?
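For reference, the non-batch case is already covered by Groovy's built-in Sql#executeInsert; a minimal sketch in Java syntax (groovy.sql.Sql is a plain Java class, so this works from Java or Groovy), with made-up table and column names:
import groovy.sql.Sql;
import java.util.Arrays;
import java.util.List;

// Single-statement insert: Groovy hands the generated keys back directly.
List<List<Object>> keys = target.executeInsert(
        "INSERT INTO MY_TABLE (NAME) VALUES (?)",   // made-up statement
        Arrays.<Object>asList("example"));
System.out.println(keys);   // e.g. [[7391]] -- one list of keys per inserted row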

I took the Groovy SQL stuff out of the picture to see if I could get something working. I wanted to make sure that DB2 for z/OS actually supported the function, and I was able to get the generated values. I was using IBM's example, but I had to add some extra code to handle the casting that the IBM example relies on.
Sql target = ... // get database connection
def preparedStatement = target.connection.prepareStatement(statement, ['ISN'] as String[])
ResultSet[] resultSets = ((DB2PreparedStatement) (preparedStatement.getDelegate().getDelegate())).getDBGeneratedKeys()
resultSets.each { ResultSet results ->
    while (results.next()) {
        println results.getInt(1)
    }
}
So... that's a little clunky, but it's functional. Unfortunately, by controlling the statement myself, I lost all of the parameter mapping that Groovy normally does for me.
I was looking through the Groovy Sql source code and can see where it explicitly creates the prepared statement without asking for the auto-generated keys, so I'm thinking I'll add a new method to Sql.metaClass that can take a list of the auto-generated column names or something to make this more palatable.
I also want to see if there's a way to get the getGeneratedKeys method working so that I don't have to do all of that casting, or at the very least a utility method to safely handle the casting for me. Here's the relevant piece of the Groovy Sql batch code:
try {
    withinBatch = true;
    PreparedStatement statement = (PreparedStatement) getAbstractStatement(new CreatePreparedStatementCommand(0), connection, sql);
    configure(statement);
    psWrapper = new BatchingPreparedStatementWrapper(statement, indexPropList, batchSize, LOG, this);
    closure.call(psWrapper);
    return psWrapper.executeBatch();
} catch (SQLException e) {
The CreatePreparedStatementCommand(0) prevents the creation of a statement that could return the auto-generated keys.
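For contrast, plain JDBC lets you request the generated-key columns when the statement is created; a minimal sketch (the table name and statement are made up, with 'ISN' as the key column):
// Ask the driver up front which generated-key columns to return.
PreparedStatement ps = connection.prepareStatement(
        "INSERT INTO MY_TABLE (NAME) VALUES (?)",
        new String[] { "ISN" });
ps.setString(1, "example");
ps.executeUpdate();
ResultSet keys = ps.getGeneratedKeys();
while (keys.next()) {
    System.out.println(keys.getInt(1));
}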

Just to make sure I wasn't crazy, I tried the getGeneratedKeys method again with a statement that I know works, and I got no results (see below). I had to recursively spin through the statement delegates to find the IBM class. So... it's not my favorite code, and it's pretty brittle, but it's functional. Now I just need to see if I can still use the withBatch method somehow; I'll obviously need to override some things.
println 'print using getGeneratedKeys'
def results = preparedStatement.getGeneratedKeys()
while (results.next()) {
    println SqlGroovyMethods.toRowResult(results)
}
println 'print using delegate processing'
println getGeneratedKeys(preparedStatement)

private List getGeneratedKeys(PreparedStatement statement) {
    switch (statement) {
        case DelegatingStatement:
            return getGeneratedKeys(DelegatingStatement.cast(statement).getDelegate())
        case DB2PreparedStatement:
            ResultSet[] resultSets = DB2PreparedStatement.cast(statement).getDBGeneratedKeys()
            List keys = []
            resultSets.each { ResultSet results ->
                while (results.next()) {
                    keys << SqlGroovyMethods.toRowResult(results)
                }
            }
            return keys
        default:
            return [SqlGroovyMethods.toRowResult(statement.getGeneratedKeys())]
    }
}
---- Console Output ----
print using getGeneratedKeys
print using delegate processing
[[KEY:7391], [KEY:7392]]

Okay, got it working. I had to hack my way into the Groovy Sql class, and there are some things that I just couldn't do because the methods in the Groovy class are private, so this implementation doesn't support cached statements, the isWithinBatch method won't operate correctly in the closure, and there's no access to the number of rows that were updated.
It'd be nice to see some variation of this in the base Groovy code, perhaps with an extension point where you plug in your own handler (since you wouldn't want the IBM-specific stuff in the base Groovy code), but at least I have a workable solution now.
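The extension point I have in mind would be something like this hypothetical hook; the name and shape are entirely made up:
public interface GeneratedKeysHandler {
    // Called after executeBatch with the (possibly wrapped) statement;
    // an implementation unwraps it and extracts the generated keys.
    List<GroovyRowResult> extract(Statement statement) throws SQLException;
}
In the meantime, here's the working subclass: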
public class SqlWithGeneratedKeys extends Sql {

    public SqlWithGeneratedKeys(Sql parent) {
        super(parent);
    }

    public List<GroovyRowResult> withBatch(String pSql, String[] keys, Closure closure) throws SQLException {
        return this.withBatch(0, pSql, keys, closure);
    }

    public List<GroovyRowResult> withBatch(int batchSize, String pSql, String[] keys, Closure closure) throws SQLException {
        final Connection connection = this.createConnection();
        List<Tuple> indexPropList = null;
        final SqlWithParams preCheck = this.buildSqlWithIndexedProps(pSql);
        BatchingPreparedStatementWrapper psWrapper = null;
        String sql = pSql;
        if (preCheck != null) {
            indexPropList = new ArrayList<Tuple>();
            for (final Object next : preCheck.getParams()) {
                indexPropList.add((Tuple) next);
            }
            sql = preCheck.getSql();
        }
        PreparedStatement statement = null;
        try {
            statement = connection.prepareStatement(sql, keys);
            this.configure(statement);
            psWrapper = new BatchingPreparedStatementWrapper(statement, indexPropList, batchSize, LOG, this);
            closure.call(psWrapper);
            psWrapper.executeBatch();
            return this.getGeneratedKeys(statement);
        } catch (final SQLException e) {
            LOG.warning("Error during batch execution of '" + sql + "' with message: " + e.getMessage());
            throw e;
        } finally {
            BaseDBServices.closeDBElements(connection, statement, null);
        }
    }

    protected List<GroovyRowResult> getGeneratedKeys(Statement statement) throws SQLException {
        if (statement instanceof DelegatingStatement) {
            return this.getGeneratedKeys(DelegatingStatement.class.cast(statement).getDelegate());
        } else if (statement instanceof DB2PreparedStatement) {
            final ResultSet[] resultSets = DB2PreparedStatement.class.cast(statement).getDBGeneratedKeys();
            final List<GroovyRowResult> keys = new ArrayList<GroovyRowResult>();
            for (final ResultSet results : resultSets) {
                while (results.next()) {
                    keys.add(SqlGroovyMethods.toRowResult(results));
                }
            }
            return keys;
        }
        return Arrays.asList(SqlGroovyMethods.toRowResult(statement.getGeneratedKeys()));
    }
}
Calling it is nice and clean.
println new SqlWithGeneratedKeys(target).withBatch(statement, ['ISN'] as String[]) { ps ->
    rows.each {
        ps.addBatch(it)
    }
}

Related

Do we need to synchronize the DB calls if multithreading is involved?

I have a scenario where two threads invoke a method, and this method generates a sequence value using Postgres nextval(test_sequence).
test_sequence initially starts at 1.
public String createNotification() {
    logger.info("createNotification ENTRY");
    Future<String> futRes = this.threadPool.submit(new Callable<String>() {
        @Override
        public String call() {
            String notificationID = getNotificationId(); // DB call to generate the next sequence value
            boolean isInsertSuccess = notificationDaoService.insertNotificationIntoDB(notificationID);
            if (isInsertSuccess == true) {
                return notificationID;
            } else {
                return null;
            }
        }
    });
    try {
        return futRes.get(5, TimeUnit.SECONDS);
    } catch (Exception e) {
        logger.error("Issue while getting value from future with exception :", e);
        return null;
    }
}
So in the above snippet, getNotificationId() will generate the next sequence value and insertNotificationIntoDB() will insert the generated notification id into the table.
I sometimes observe a primary key violation exception when multiple threads invoke createNotification().
So I am thinking of synchronizing the DB calls as shown below:
synchronized (object) {
    String notificationID = getNotificationId();
    boolean isInsertSuccess = notificationDaoService.insertNotificationIntoDB(notificationID);
}
Is this solution OK?
I also want to ask whether I can generalize: if multiple threads access a function, and that function makes DB calls that do basic CRUD, do all of those DB calls need to be synchronized? Is that the right inference?

JDBC inserting String, Date and duration values into a table

I've been trying to insert String, Date (now) and duration values into a table.
Here is my code. In the args section (at the end), app.insert("name", "date_added", "path", "duration"), the values I pass for date_added and duration are flagged with an error asking me to convert them into Strings. How can I prevent this and have the code pass the date (now) and the duration to app.insert?
package xxxxx;

import java.sql.*;

public class SAMPLE_SOUND {

    private Connection connect() {
        String url = "jdbc:sqlite:neural.db";
        Connection conn = null;
        try {
            conn = DriverManager.getConnection(url);
        } catch (SQLException e) {
            System.out.println(e.getMessage());
        }
        return conn;
    }

    public void insert(String name, Date date_added, String path, Time duration) {
        String sql = "INSERT INTO SAMPLE_SOUND(name,date_added,path,duration) VALUES(?,?,?,?)";
        try (Connection conn = this.connect();
             PreparedStatement pstmt = conn.prepareStatement(sql)) {
            pstmt.setString(1, name);
            pstmt.setDate(2, date_added);
            pstmt.setString(3, path);
            pstmt.setTime(4, duration);
            pstmt.executeUpdate();
        } catch (SQLException e) {
            System.out.println(e.getMessage());
        }
    }

    public static void main(String[] args) {
        SAMPLE_SOUND app = new SAMPLE_SOUND();
        app.insert("name", "date_added", "path", "duration");
    }
}
it wants me to convert them into String with an error
No, not at all. It signals that you're trying to pass Strings as arguments, although the method expects a Date and a Time. The correct fix is of course to pass a Date and a Time. But IntelliJ doesn't know that, and it suggests that maybe you actually do want to pass Strings, in which case the argument types should be changed to String.
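For example, a corrected call could look like this; the duration value below is made up for illustration:
import java.sql.Date;
import java.sql.Time;

// Pass real java.sql.Date/Time values instead of strings.
app.insert("name",
        new Date(System.currentTimeMillis()),   // date_added: "now"
        "path",
        Time.valueOf("00:03:30"));              // duration stored as a Time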
Remember, you are the smart developer. It's up to you to decide what to do.
Please respect the Java naming conventions.

ConcurrentHashMap remove issue

I have a class like this
import java.util.concurrent.*;
public class TestClass<V> {
private final ConcurrentMap<String, Future<V>> requests;
private final ExecutorService executorService;
public TestClass(final ExecutorService executorService) {
this.executorService = executorService;
this.requests = new ConcurrentHashMap<>();
}
public V submitRequest(String cacheKey, Callable<V> request) throws Exception {
final Future<V> task = getOrCreateTask(cacheKey, request);
final V results;
try {
results = task.get();
} catch (InterruptedException | ExecutionException e) {
throw new IllegalStateException(String.format("Exception while executing request for key '%s'", cacheKey),
e);
} finally {
//Nullpointer here
requests.remove(cacheKey);
}
return results;
}
private synchronized Future<V> getOrCreateTask(String key, Callable<V> request) {
if (requests.containsKey(key)) {
return requests.get(key);
} else {
final Future<V> newTask = executorService.submit(request);
requests.put(key, newTask);
return newTask;
}
}
}
but sometimes under heavy load the server throws a NullPointerException on requests.remove(cacheKey). I have read that a final field that does not escape via this in the constructor has guaranteed write visibility, i.e. other threads can see what is going on with my requests map.
I'm not sure how to fix this efficiently. I don't like the idea of adding synchronized to the whole parent-level method.
I'm not actually sure the NPE is where you're identifying it, unless cacheKey is null, which you could check for. The ConcurrentMap is initialized correctly, so the requests field should never be null. Nevertheless, this code is not correctly synchronized: getOrCreateTask() performs two separate operations on the map that, even under the synchronized keyword, are not atomic with respect to the map, because submitRequest also touches the map when it removes values.
What is likely happening is that between the ConcurrentMap#containsKey check and the ConcurrentMap#get, another thread removes the value from the cache (ConcurrentMap#remove).
Thread A: Check Contains "foobar" => true
Thread B: Remove "foobar"
Thread A: Call get("foobar") => null
Thread A: Call Future#get on the null reference, which then throws an NPE.
Since you control the ConcurrentMap, you know you'll never have null values in it. In that case you should instead just call the #get method once and check whether the returned value is null. This prevents another thread from removing the value between a contains/get pair, since you only access the map through one atomic operation.
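A sketch of getOrCreateTask rewritten that way (ConcurrentMap#computeIfAbsent would be an even more compact alternative):
private synchronized Future<V> getOrCreateTask(String key, Callable<V> request) {
    // One atomic read; null means "not cached", since nulls are never stored.
    Future<V> existing = requests.get(key);
    if (existing != null) {
        return existing;
    }
    final Future<V> newTask = executorService.submit(request);
    requests.put(key, newTask);
    return newTask;
}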

Filtering out soft deletes with AutoQuery

I'm using ServiceStack with OrmLite, and having great success with it so far. I'm looking for a way to filter out 'soft deleted' records when using AutoQuery. I've seen this suggestion to use a SqlExpression, but I'm not sure where you would place that. In the AppHost when the application starts? I did that, but the deleted records are still returned. My QueryDb request object in this case is as follows:
public class QueryableStore : QueryDb<StoreDto>
{
}
Other SqlExpressions I've used live in the repository class itself, but since I'm using QueryDb and only the message itself (not leveraging my repository class), I don't have any other code in place to handle these messages and filter out the 'deleted' ones.
I've also tried using a custom service base, as suggested by this approach, with the following:
public abstract class MyCustomServiceBase : AutoQueryServiceBase
{
    private const string IsDeleted = "F_isdeleted";

    public override object Exec<From>(IQueryDb<From> dto)
    {
        var q = AutoQuery.CreateQuery(dto, Request);
        q.And("{0} = {1}", IsDeleted, 0);
        return AutoQuery.Execute(dto, q);
    }

    public override object Exec<From, Into>(IQueryDb<From, Into> dto)
    {
        var q = AutoQuery.CreateQuery(dto, Request);
        q.And("{0} = {1}", IsDeleted, 0);
        return AutoQuery.Execute(dto, q);
    }
}
This code gets called, but when the Execute call happens I get an error:
System.ArgumentException: 'Conversion failed when converting the varchar value 'F_isdeleted' to data type int.'
The F_isdeleted column is a 'bit' in SQL Server, and represented as a bool in my POCO.
Any ideas on what would work here? I'm kind of at a loss that this seems so difficult to do, yet the docs make it look pretty simple.
The {0} placeholders are for DB parameter values only, so your SQL should use them only for DB parameters (here, the column name was being sent as a parameter value, hence the conversion error), e.g:
var q = AutoQuery.CreateQuery(dto, Request);
q.And(IsDeleted + " = {0}", false);
Otherwise if you want to use SQL Server-specific syntax you can use:
q.And(IsDeleted + " = 0");

Google Custom Search API - Search Results

I have somewhat lost touch with custom search engines ever since Google switched from its legacy search engine API in favor of the Google Custom Search API. I'm hoping someone might be able to tell me whether a (pretty simple) goal can be accomplished with the new framework, and any starting help would be great.
Specifically, I am looking to write a program which will read in text from a text file, then use five words from that document in a Google search - the point being to figure out how many results come back from that search.
An example input/output would be:
Input: "This is my search term" -- quotations included in the search!
Output: there were 7 total results
Thanks so much, all, for your time/help
First you need to create a Google Custom Search project inside your Google account.
From this project you must obtain a Custom Search Engine ID, known as the cx parameter. You must also obtain an API key. Both of these are available from your Google Custom Search API project inside your Google account.
Then, if you prefer Java, here's a working example:
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URI;
import java.net.URISyntaxException;
import java.net.URL;

public class GoogleCustomSearchAPI {

    public static void main(String[] args) throws Exception {
        String key = "your_key";
        String qry = "your_query";
        String cx = "your_cx";
        // Fetch urls. NOTE: the fields filter trims the response down to just
        // queries.request.totalResults; drop it if you want the full item list.
        URL url = new URL("https://www.googleapis.com/customsearch/v1?key=" + key
                + "&cx=" + cx + "&q=" + qry + "&alt=json"
                + "&fields=queries(request(totalResults))");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        conn.setRequestProperty("Accept", "application/json");
        BufferedReader br = new BufferedReader(new InputStreamReader(conn.getInputStream()));
        // Remove comments if you need to output the raw JSON
        /*String output;
        System.out.println("Output from Server .... \n");
        while ((output = br.readLine()) != null) {
            System.out.println(output);
        }*/
        // Print the urls and domains from Google Custom Search
        String searchResult;
        while ((searchResult = br.readLine()) != null) {
            int startPos = searchResult.indexOf("\"link\": \"") + "\"link\": \"".length();
            int endPos = searchResult.indexOf("\",");
            if (searchResult.contains("\"link\": \"") && (endPos > startPos)) {
                String link = searchResult.substring(startPos, endPos);
                if (link.contains(",")) {
                    String tempLink = "\"";
                    tempLink += link;
                    tempLink += "\"";
                    System.out.println(tempLink);
                } else {
                    System.out.println(link);
                }
                System.out.println(getDomainName(link));
            }
        }
        conn.disconnect();
    }

    public static String getDomainName(String url) throws URISyntaxException {
        URI uri = new URI(url);
        String domain = uri.getHost();
        return domain.startsWith("www.") ? domain.substring(4) : domain;
    }
}
The "&queriefields=queries(request(totalResults))" is what makes the difference and gives sou what you need. But keep in mind that you can perform only 100 queries per day for free and that the results of Custom Search API are sometimes quite different from the those returned from Google.com search
If anybody still needs an example of the CSE (Google Custom Search Engine) API, this is a working method:
public static List<Result> search(String keyword) {
    Customsearch customsearch = null;
    try {
        customsearch = new Customsearch(new NetHttpTransport(), new JacksonFactory(), new HttpRequestInitializer() {
            public void initialize(HttpRequest httpRequest) {
                try {
                    // set connect and read timeouts
                    httpRequest.setConnectTimeout(HTTP_REQUEST_TIMEOUT);
                    httpRequest.setReadTimeout(HTTP_REQUEST_TIMEOUT);
                } catch (Exception ex) {
                    ex.printStackTrace();
                }
            }
        });
    } catch (Exception e) {
        e.printStackTrace();
    }
    List<Result> resultList = null;
    try {
        Customsearch.Cse.List list = customsearch.cse().list(keyword);
        list.setKey(GOOGLE_API_KEY);
        list.setCx(SEARCH_ENGINE_ID);
        Search results = list.execute();
        resultList = results.getItems();
    } catch (Exception e) {
        e.printStackTrace();
    }
    return resultList;
}
This method returns a List of Result objects, so you can iterate through it:
List<Result> results = new ArrayList<>();
try {
    results = search(QUERY);
} catch (Exception e) {
    e.printStackTrace();
}
for (Result result : results) {
    System.out.println(result.getDisplayLink());
    System.out.println(result.getTitle());
    // all attributes
    System.out.println(result.toString());
}
I use these Gradle dependencies:
dependencies {
    compile 'com.google.apis:google-api-services-customsearch:v1-rev57-1.23.0'
}
Don't forget to define your own GOOGLE_API_KEY, SEARCH_ENGINE_ID (cx), QUERY and HTTP_REQUEST_TIMEOUT (e.g. private static final int HTTP_REQUEST_TIMEOUT = 3 * 600000;).
