Express Complex Sub-Select With Dynamic Query - liferay

How would you replicate this SQL (sub-select) with an equivalent Liferay DynamicQuery expression within a ServiceImpl class:
SELECT * FROM journalarticle
WHERE (urlTitle, version) IN
    (SELECT urlTitle, MAX(version)
     FROM journalarticle
     WHERE structureId = 'structure-id' AND companyId = 10150 AND groupId = 10170
     GROUP BY urlTitle)
ORDER BY createDate DESC
LIMIT 0,4

Since this is too long for a comment, I am pasting the code here. It should produce what you want, though I haven't compiled or run it.
// Outer query: newest articles first, limited to the first four rows.
DynamicQuery dynamicQuery = DynamicQueryFactoryUtil.forClass(JournalArticle.class);
dynamicQuery.addOrder(OrderFactoryUtil.desc("createDate"));
dynamicQuery.setLimit(0, 4);

// Sub-query: collect the candidate article ids.
DynamicQuery subQuery = DynamicQueryFactoryUtil.forClass(JournalArticle.class);
subQuery.setProjection(ProjectionFactoryUtil.projectionList()
    .add(ProjectionFactoryUtil.property("id"))
    .add(ProjectionFactoryUtil.max("version")));
subQuery.add(PropertyFactoryUtil.forName("structureId").eq("structure-id"));
// companyId and groupId are long columns, so compare against longs, not strings.
subQuery.add(PropertyFactoryUtil.forName("companyId").eq(10150L));
subQuery.add(PropertyFactoryUtil.forName("groupId").eq(10170L));

List<Long> ids = new ArrayList<Long>();
try {
    List<Object[]> list = JournalArticleLocalServiceUtil.dynamicQuery(subQuery);
    for (Object[] object : list) {
        // The 0th field is the id, the 1st is MAX(version).
        ids.add((Long) object[0]);
    }
} catch (SystemException e) {
    e.printStackTrace();
}

// in() with an empty array generates invalid SQL, so only add the
// criterion when the sub-query returned something.
if (!ids.isEmpty()) {
    dynamicQuery.add(PropertyFactoryUtil.forName("id").in(ids.toArray()));
}
try {
    List<JournalArticle> journalArticles = JournalArticleLocalServiceUtil.dynamicQuery(dynamicQuery);
} catch (SystemException e) {
    e.printStackTrace();
}
I hope this might be useful to you.
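One caveat (also not compiled): the sub-query above projects the id and MAX(version) without a GROUP BY, so it does not quite reproduce the GROUP BY urlTitle from the question. To get the grouped (urlTitle, MAX(version)) pairs, the projection would look something like this:
// Group by urlTitle and take the highest version per group, mirroring
// the GROUP BY in the original SQL.
subQuery.setProjection(ProjectionFactoryUtil.projectionList()
    .add(ProjectionFactoryUtil.groupProperty("urlTitle"))
    .add(ProjectionFactoryUtil.max("version")));
Each Object[] in the result then holds a urlTitle and its maximum version; since DynamicQuery cannot express a tuple IN, you would match those pairs against the outer query in Java (or build a disjunction of per-pair criteria).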

Related

Unexpected double WHERE clause in ServiceStack OrmLite

We have an issue that occurs on every method call for limited periods of time; then it works as expected again. The issue is that the generated SQL contains a double WHERE clause.
We're using ServiceStack 4.5.14.
The method we have:
protected static void InsertOrUpdate<T>(
    IDbConnection connection,
    T item,
    Expression<Func<T, bool>> singleItemPredicate,
    Expression<Func<T, object>> updateOnlyFields = null)
{
    var type = item.GetType();
    var idProperty = type.GetProperty("Id");
    if (idProperty == null)
    {
        throw new Exception("Cannot insert or update on a class with no ID property");
    }
    var currentId = (int)idProperty.GetValue(item);
    if (currentId != 0)
    {
        throw new Exception("Cannot insert or update with non-zero ID");
    }
    var query = connection.From<T>().Where(singleItemPredicate).WithSqlFilter(WithUpdateLock);
    T existingItem;
    try
    {
        existingItem = connection.Select(query).SingleOrDefault();
        Log.Verbose(connection.GetLastSql);
    }
    catch (SqlException)
    {
        Log.Verbose(connection.GetLastSql);
        throw;
    }
    if (existingItem == null)
    {
        Insert(connection, item);
        return;
    }
    var existingId = (int)idProperty.GetValue(existingItem);
    idProperty.SetValue(item, existingId);
    try
    {
        var affectedRowCount = connection.UpdateOnly(item, onlyFields: updateOnlyFields, where: singleItemPredicate);
        Log.Verbose(connection.GetLastSql);
        if (affectedRowCount != 1)
        {
            throw new SwToolsException("Update failed");
        }
    }
    catch (SqlException)
    {
        Log.Verbose(connection.GetLastSql);
        throw;
    }
}
When it all works, an example output from the logs could be:
SELECT "Id", "Application", "Hostname", "LastContact", "Version", "ToolState", "ServerState"
FROM "ca"."ExecutionHost"
WITH (UPDLOCK) WHERE ("Hostname" = #0)
UPDATE "ca"."ExecutionHost" SET "LastContact"=#LastContact, "Version"=#Version, "ToolState"=#ToolState, "ServerState"=#ServerState WHERE ("Hostname" = #0)
When it fails, the output (same session, only seconds later) was:
SELECT "Id", "Application", "Hostname", "LastContact", "Version", "ToolState", "ServerState"
FROM "ca"."ExecutionHost"
WITH (UPDLOCK) WHERE ("Hostname" = #0)
UPDATE "ca"."ExecutionHost" SET "LastContact"=#LastContact, "Version"=#Version, "ToolState"=#ToolState, "ServerState"=#ServerState WHERE "LastContact"=#LastContact, "Version"=#Version, "ToolState"=#ToolState, "ServerState"=#ServerState WHERE ("Hostname" = #0)
The part that makes the call fail is the extra WHERE clause injected before the real one; it repeats the content of the SET clause (WHERE "LastContact"=#LastContact, ...).
We've been debugging this for a while and don't really know if the issue is on "our" side or in ServiceStack.
Any ideas on where to continue?

Azure Batch Insert: Bad Request Error

I am getting the below error while trying to insert multiple entities into Azure Table storage:
com.microsoft.azure.storage.table.TableServiceException: Bad Request
at com.microsoft.azure.storage.table.TableBatchOperation$1.postProcessResponse(TableBatchOperation.java:525)
at com.microsoft.azure.storage.table.TableBatchOperation$1.postProcessResponse(TableBatchOperation.java:433)
at com.microsoft.azure.storage.core.ExecutionEngine.executeWithRetry(ExecutionEngine.java:146)
Below is the Java code for batch insert:
public BatchInsertResponse batchInsert(BatchInsertRequest request) {
    BatchInsertResponse response = new BatchInsertResponse();
    String erpName = request.getErpName();
    HashMap<String, List<TableEntity>> tableNameToEntityMap = request.getTableNameToEntityMap();
    HashMap<String, List<TableEntity>> errorMap = new HashMap<String, List<TableEntity>>();
    HashMap<String, List<TableEntity>> successMap = new HashMap<String, List<TableEntity>>();
    CloudTable cloudTable = null;
    for (Map.Entry<String, List<TableEntity>> entry : tableNameToEntityMap.entrySet()) {
        try {
            cloudTable = azureStorage.getTable(entry.getKey());
        } catch (Exception e) {
            e.printStackTrace();
        }
        // Define a batch operation.
        TableBatchOperation batchOperation = new TableBatchOperation();
        List<TableEntity> value = entry.getValue();
        for (int i = 0; i < value.size(); i++) {
            TableEntity entity = value.get(i);
            batchOperation.insertOrReplace(entity);
            if (i != 0 && i % batchSize == 0) {
                try {
                    cloudTable.execute(batchOperation);
                    batchOperation.clear();
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }
        try {
            cloudTable.execute(batchOperation);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
    return response;
}
The above code works fine if I set batchSize to 10, but if I set it to 100 or 1000 it throws the Bad Request error.
Please help me resolve this error. I am using Spring Boot and Azure-storage Java SDK version 4.3.0.
As Aravind mentioned, a 400 error usually means there's something wrong with your data. Per the linked documentation, an entity batch transaction will fail if one or more of the following conditions are not met:
All entities subject to operations as part of the transaction must have the same PartitionKey value.
An entity can appear only once in the transaction, and only one operation may be performed against it.
The transaction can include at most 100 entities, and its total payload may be no more than 4 MB in size.
All entities are subject to the limitations described in Understanding the Table Service Data Model.
Please check your entities against these four rules and ensure that you're not violating one of the rules.
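Two of those rules are easy to trip over with the loop in the question. First, by the time i != 0 && i % batchSize == 0 fires, the batch already holds batchSize + 1 entities (indices 0 through i), so even batchSize = 100 exceeds the 100-entity limit. Second, every entity in one batch must share a PartitionKey, which the loop never checks. A sketch of a helper that respects both limits (not compiled; cloudTable comes from the question's own code):
// Group entities by partition key, then write them in batches of at most
// 100, as required by the Table service's entity group transaction rules.
private static final int MAX_BATCH_SIZE = 100;

void insertInBatches(CloudTable cloudTable, List<TableEntity> entities) throws StorageException {
    Map<String, List<TableEntity>> byPartition = new HashMap<String, List<TableEntity>>();
    for (TableEntity entity : entities) {
        List<TableEntity> group = byPartition.get(entity.getPartitionKey());
        if (group == null) {
            group = new ArrayList<TableEntity>();
            byPartition.put(entity.getPartitionKey(), group);
        }
        group.add(entity);
    }
    for (List<TableEntity> group : byPartition.values()) {
        TableBatchOperation batch = new TableBatchOperation();
        for (TableEntity entity : group) {
            batch.insertOrReplace(entity);
            // TableBatchOperation extends ArrayList, so size() is the
            // number of queued operations.
            if (batch.size() == MAX_BATCH_SIZE) {
                cloudTable.execute(batch);
                batch.clear();
            }
        }
        if (!batch.isEmpty()) {
            cloudTable.execute(batch);
        }
    }
}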

Duplicate entry '0' for key 'PRIMARY' while adding data to MySQL ([JDBCExceptionReporter:76])

I am getting the following exception while inserting a second time into the database from my Liferay portlet:
[JDBCExceptionReporter:76] Duplicate entry '0' for key 'PRIMARY' (I think this is because my primary key value is not being auto-incremented.)
I think I have made a mistake in how the primary key is auto-incremented in my custom portlet, but I don't know where I have to make changes to fix it. Can anyone tell me where to make the changes to solve this auto-increment issue?
This is the code where the auto-increment is set:
try {
    restVar = restaurantPersistence.create(counterLocalService
        .increment(restaurant.class.toString()));
} catch (SystemException e) {
    e.printStackTrace();
    return restVar = null;
}
try {
    resourceLocalService.addResources(0, restParam.getGroupId(), restParam.getUserId(),
        restaurant.class.getName(), restParam.getPrimaryKey(), false, true, true);
} catch (PortalException e) {
    e.printStackTrace();
    return restVar = null;
} catch (SystemException e) {
    e.printStackTrace();
    return restVar = null;
}
Try this one:
long primaryKeyId = CounterLocalServiceUtil.increment(XYZDetails.class.getName());
XYZDetails xyzDetails = XYZDetailsLocalServiceUtil.createXYZDetails(primaryKeyId);
Set the other details on the XYZDetails object, e.g.:
xyzDetails.setName("Name");
Then save the details:
XYZDetailsLocalServiceUtil.addXYZDetails(xyzDetails);
Hope this helps!
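Applied to the restaurant entity from the question, the same pattern would look roughly like this (a sketch reusing restaurantPersistence from the question's own code; note that restaurant.class.toString() yields "class com.example.restaurant" while getName() yields just the fully qualified name, so pick one counter name and use it consistently):
// Draw a new, unique primary key from Liferay's counter service.
long pk = counterLocalService.increment(restaurant.class.getName());
restaurant restVar = restaurantPersistence.create(pk);
// ... set the entity's fields on restVar here ...
restaurantPersistence.update(restVar, false); // update() signature varies by Liferay version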

Non-unique LDAP attribute name with UnboundID LDAP SDK

I am attempting to retrieve entries that have several attributes with the same name from a Netscape LDAP directory, using the LDAP SDK from UnboundID. The problem is that only one of the attribute values is returned. I am guessing the LDAP SDK relies heavily on unique attribute names; is there a way to configure it to return the non-distinct values as well?
@Test
public void testRetrievingUsingListener() throws LDAPException {
    long currentTimeMillis = System.currentTimeMillis();
    LDAPConnection connection = new LDAPConnection("xxx.xxx.xxx", 389,
        "uid=xxx-websrv,ou=xxxx,dc=xxx,dc=no",
        "xxxx");
    SearchRequest searchRequest = new SearchRequest(
        "ou=xxx,ou=xx,dc=xx,dc=xx",
        SearchScope.SUB, "(uid=xxx)", SearchRequest.ALL_USER_ATTRIBUTES);
    LDAPEntrySource entrySource = new LDAPEntrySource(connection,
        searchRequest, true);
    try {
        while (true) {
            try {
                System.out.println("*******************************************");
                Entry entry = entrySource.nextEntry();
                if (entry == null) {
                    // There are no more entries to be read.
                    break;
                } else {
                    Collection<Attribute> attributes = entry.getAttributes();
                    for (Attribute attr : attributes) {
                        System.out.println(attr.getName() + " " + attr.getValue());
                    }
                }
            } catch (SearchResultReferenceEntrySourceException e) {
                // The directory server returned a search result reference.
                SearchResultReference searchReference = e.getSearchReference();
            } catch (EntrySourceException e) {
                // Some kind of problem was encountered (e.g., the connection
                // is no longer valid). See if we can continue reading entries.
                if (!e.mayContinueReading()) {
                    break;
                }
            }
        }
    } finally {
        entrySource.close();
    }
    System.out.println("Finished in " + (System.currentTimeMillis() - currentTimeMillis));
}
Non-unique LDAP attributes are considered multivalued and are represented as a String array.
Use Attribute.getValues() instead of Attribute.getValue().
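In the loop from the question, that means printing every value of each attribute rather than just the first one, along these lines:
for (Attribute attr : attributes) {
    // getValues() returns all values of a multivalued attribute;
    // getValue() only returns the first one.
    for (String value : attr.getValues()) {
        System.out.println(attr.getName() + " " + value);
    }
}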

Apache Thrift call to Cassandra inserts junk/null values in the "key"

cassandra-thrift-1.1.2.jar
Problem code:
ColumnOrSuperColumn cosc = null;
org.apache.cassandra.thrift.Column c = new org.apache.cassandra.thrift.Column();
c.setName("full_name".getBytes("UTF-8"));
c.setValue("Test name".getBytes("UTF-8"));
c.setTimestamp(System.currentTimeMillis());
// insert data
// long timestamp = System.currentTimeMillis();
try {
    client.set_keyspace("CClient");
    bb = ByteBuffer.allocate(10);
    client.insert(bb.putInt(1),
        new ColumnParent("users"),
        c,
        ConsistencyLevel.QUORUM);
    bb.putInt(2);
    // cp is a ColumnPath defined elsewhere in the asker's code.
    cosc = client.get(bb, cp, ConsistencyLevel.QUORUM);
} catch (TimedOutException toe) {
    System.out.println(toe.getMessage());
} catch (org.apache.cassandra.thrift.UnavailableException e) {
    e.printStackTrace();
} catch (Exception e) {
    e.printStackTrace();
} finally {
    System.out.println(new String(cosc.getColumn().getName()) + "-"
        + new String(cosc.getColumn().getValue()));
}
The code shown above inserts some junk or null row key into the database, and I don't understand why.
See how it looks on the CLI:
RowKey:
=> (column=full_name, value=Test name, timestamp=1345743985973)
Any help in this is greatly appreciated.
Thanks.
You're creating a row with the row key as raw bytes.
In the Cassandra CLI you'll probably only see the row key if you list the rows as bytes.
E.g. in cassandra-cli, type:
assume users keys as bytes;
list users;
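There is also a likely explanation for the junk key itself (an assumption based on how Thrift handles ByteBuffers, not confirmed against this exact setup): Thrift serializes a ByteBuffer from its current position to its limit. After ByteBuffer.allocate(10) and putInt(1), the position is 4 and the limit is 10, so the six trailing zero bytes are sent as the row key, which would explain the blank RowKey shown by the CLI. Flipping the buffer before the call sends the four bytes that were actually written:
// Allocate exactly the bytes needed for the key, then flip so that
// position..limit covers the int that was just written.
ByteBuffer key = ByteBuffer.allocate(4);
key.putInt(1);
key.flip();
client.insert(key, new ColumnParent("users"), c, ConsistencyLevel.QUORUM);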
