How to add queries by search criteria? - jhipster

I am new to JHipster.
Is there a step-by-step guide for implementing search criteria?
I want to add the possibility of filtering an entity by any of its fields.
I have enabled filtering, but I can't find a way to make it show up on the entity's CRUD page.
I understand that I have to add code in different places in the application, but I did not find any examples or guides.
Thank you very much in advance for any help.

I don't know of a guide for that, but I also use JHipster and built search criteria for my app, and it works fine. I hope this is what you're looking for.
I'm going to use a comment table as an example, where I want to filter by hidden, flagged, or both, with pagination on the front end. So I created a CommentRepositoryCustom.java:
import org.springframework.data.domain.Page;
import org.springframework.data.domain.Pageable;
public interface CommentRepositoryCustom {
Page<Comment> findCommentByCommentHiddenAndFlagged(Boolean hidden, Boolean flagged, Pageable pageable);
}
After that I created the implementation for my findCommentByCommentHiddenAndFlagged in another custom repository I created called CommentRepositoryCustomImpl.java:
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.domain.Page;
import org.springframework.data.domain.PageImpl;
import org.springframework.data.domain.Pageable;
import org.springframework.stereotype.Repository;
import org.springframework.transaction.annotation.Transactional;
import javax.persistence.EntityManager;
import javax.persistence.TypedQuery;
import javax.persistence.criteria.CriteriaBuilder;
import javax.persistence.criteria.CriteriaQuery;
import javax.persistence.criteria.Predicate;
import javax.persistence.criteria.Root;
@Repository
@Transactional
public class CommentRepositoryCustomImpl implements CommentRepositoryCustom{
private static final String HIDDEN_LABEL = "hidden";
private static final String FLAGGED_LABEL = "flagged";
@Autowired
private EntityManager em;
@Override
public Page<Comment> findCommentByCommentHiddenAndFlagged(Boolean hidden, Boolean flagged, Pageable pageable) {
CriteriaBuilder criteriaBuilder = em.getCriteriaBuilder();
CriteriaQuery<Comment> commentCriteriaQuery = criteriaBuilder.createQuery(Comment.class);
Root<Comment> commentRoot = commentCriteriaQuery.from(Comment.class);
Predicate predicateHidden = criteriaBuilder.equal(commentRoot.get(HIDDEN_LABEL), hidden);
Predicate predicateFlagged = criteriaBuilder.equal(commentRoot.get(FLAGGED_LABEL), flagged);
Predicate concatenate;
TypedQuery<Comment> typedQuery;
CriteriaQuery<Long> countQuery = criteriaBuilder.createQuery(Long.class);
Root<Comment> countCommentRoot = countQuery.from(Comment.class);
Long count;
if (hidden != null && flagged != null) {
concatenate = criteriaBuilder.and(predicateHidden);
concatenate = criteriaBuilder.and(concatenate, predicateFlagged);
countQuery.select(criteriaBuilder.count(countCommentRoot)).where(concatenate);
typedQuery = em.createQuery(commentCriteriaQuery.select(commentRoot).where(concatenate));
} else if(hidden != null) {
concatenate = criteriaBuilder.and(predicateHidden);
countQuery.select(criteriaBuilder.count(countCommentRoot)).where(concatenate);
typedQuery = em.createQuery(commentCriteriaQuery.select(commentRoot).where(concatenate));
} else if (flagged != null) {
concatenate = criteriaBuilder.and(predicateFlagged);
countQuery.select(criteriaBuilder.count(countCommentRoot)).where(concatenate);
typedQuery = em.createQuery(commentCriteriaQuery.select(commentRoot).where(concatenate));
} else {
countQuery.select(criteriaBuilder.count(countCommentRoot));
typedQuery = em.createQuery(commentCriteriaQuery.select(commentRoot));
}
typedQuery.setFirstResult((int) pageable.getOffset()).setMaxResults(pageable.getPageSize());
count = em.createQuery(countQuery).getSingleResult();
return new PageImpl<>(typedQuery.getResultList(), pageable, count);
}
}
Now you just need to call findCommentByCommentHiddenAndFlagged with the right params in your service.
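For reference, here is a minimal sketch of such a service method, assuming the custom repository above is injected (the class and method names are just illustrative, not JHipster-generated):
import org.springframework.data.domain.Page;
import org.springframework.data.domain.Pageable;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;
@Service
@Transactional(readOnly = true)
public class CommentQueryService {
    private final CommentRepositoryCustom commentRepositoryCustom;
    public CommentQueryService(CommentRepositoryCustom commentRepositoryCustom) {
        this.commentRepositoryCustom = commentRepositoryCustom;
    }
    // Pass null for any flag you don't want to filter on; the repository handles that case.
    public Page<Comment> findByHiddenAndFlagged(Boolean hidden, Boolean flagged, Pageable pageable) {
        return commentRepositoryCustom.findCommentByCommentHiddenAndFlagged(hidden, flagged, pageable);
    }
}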
Let me know if this is what you're looking for.

Do you have a GET endpoint in the web.rest package that looks like this? @GetMapping("/clcomments") public ResponseEntity<List<Entity>> getAllComments(CommentsCriteria criteria) {... }
If you do, search by criteria works!
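If filtering was enabled when the entity was generated, JHipster also creates a CommentsCriteria class and a query service for it, so simple field filters work out of the box through request parameters; for example (endpoint and field names assumed from above):
GET /api/clcomments?hidden.equals=true&flagged.equals=false&page=0&size=20&sort=id,desc
GET /api/clcomments?hidden.specified=false
As far as I know, the generated CRUD page does not render filter inputs for you, so you still have to add those fields to the front-end component and pass them to the entity service yourself.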

Related

Transform PCollection<KV> to custom class

My goal is to read a file from GCS and write it to Cassandra.
I am new to Apache Beam/Dataflow, and most of the hands-on material I could find is for Python. Unfortunately, CassandraIO in Beam is Java only.
I used the word count example as a template and tried to get rid of the TextIO.write() and replace it with a CassandraIO.<Words>write().
Here is my Java class for the Cassandra table:
package org.apache.beam.examples;
import java.io.Serializable;
import com.datastax.driver.mapping.annotations.Column;
import com.datastax.driver.mapping.annotations.PartitionKey;
import com.datastax.driver.mapping.annotations.Table;
@Table(keyspace = "test", name = "words", readConsistency = "ONE", writeConsistency = "QUORUM",
caseSensitiveKeyspace = false, caseSensitiveTable = false)
public class Words implements Serializable {
// private static final long serialVersionUID = 1L;
@PartitionKey
@Column(name = "word")
public String word;
@Column(name = "count")
public long count;
public Words() {
}
public Words(String word, int count) {
this.word = word;
this.count = count;
}
@Override
public boolean equals(Object obj) {
Words other = (Words) obj;
return this.word.equals(other.word) && this.count == other.count;
}
}
And here is the pipeline part of the main code:
static void runWordCount(WordCount.WordCountOptions options) {
Pipeline p = Pipeline.create(options);
// Concepts #2 and #3: Our pipeline applies the composite CountWords transform, and passes the
// static FormatAsTextFn() to the ParDo transform.
p.apply("ReadLines", TextIO.read().from(options.getInputFile()))
.apply(new WordCountToCassandra.CountWords())
// Here I'm not sure how to transform PCollection<KV> into PCollection<Words>
.apply(MapElements.into(TypeDescriptor.of(Words.class)).via(/* ??? KV<String, Long> -> Words */))
.apply(CassandraIO.<Words>write()
.withHosts(Collections.singletonList("my_ip"))
.withPort(9142)
.withKeyspace("test")
.withEntity(Words.class));
p.run().waitUntilFinish();
}
My understanding is that I should use a PTransform to go from PCollection<T1> to PCollection<T2>. I don't know how to map that.
If it's 1:1 mapping, MapElements.into is the right choice.
You can either specify a class that implements SerializableFunction<FromType, ToType>, or simply use a lambda, for example:
.apply(MapElements.into(TypeDescriptor.of(Words.class)).via(kv -> new Words(kv.getKey(), kv.getValue())));
Please check MapElements for more information.
If the transformation is not one-to-one, there are other available options such as FlatMapElements or ParDo.
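Applied to the pipeline from the question, the mapping step could look roughly like this (a sketch, assuming the Words constructor is changed to accept a long count so it matches the KV<String, Long> elements produced by CountWords):
p.apply("ReadLines", TextIO.read().from(options.getInputFile()))
    .apply(new WordCountToCassandra.CountWords())
    // 1:1 mapping from KV<String, Long> to the Cassandra entity
    .apply("ToWords", MapElements
        .into(TypeDescriptor.of(Words.class))
        .via((KV<String, Long> kv) -> new Words(kv.getKey(), kv.getValue())))
    .apply(CassandraIO.<Words>write()
        .withHosts(Collections.singletonList("my_ip"))
        .withPort(9142)
        .withKeyspace("test")
        .withEntity(Words.class));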

Liferay migrating data into Liferay table from some legacy database

I am looking to migrate data from a legacy database into one of the Liferay tables. I can write the migration script, but I was wondering if there will be some issue with the "Counter" service of Liferay.
For example, I have a legacy custom user table. I need to move the users inside this table to Liferay's User_ table. I can use an SQL script to move the data, but I am wondering what happens with the primary key. As far as I know, Liferay has a counter service to create primary keys and keep track of the current id.
So while migrating, is there anything I need to do so that the counter is not messed up after migration?
There are more issues than just the counter. You should strictly use Liferay's API to import the content.
I could name a few potential issues to pay attention to, but the list would probably miss a few more - and it would make you (or others reading this answer) confident that you can now cope with all the issues. You can't. Don't go there. Simply use the API and it will take care of updating all the required dependencies. Plus, the API makes it obvious what other data you still need in conjunction with your imported data.
Anybody who describes it in more detail will set you up for disaster. And we've seen people discover their disasters more than 6 months after manually writing to the database (in one particular case by running into duplicate primary keys, i.e. specifically the counter that you mention). Any of the possible failures might bring down your portal from one second to the next, when you've long forgotten that you (or someone else) manually wrote to the database.
You could migrate data from a legacy database with Pentaho Data Integration by calling a REST endpoint (defined through {MODEL_NAME}ServiceImpl.java) that returns a counter increment for the entity you want to insert the data into. You just have to make sure to use the same counter for each entity that is getting the new ids; a minimal sketch of such a method is below.
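A minimal sketch of such a method, assuming Liferay 6.x Service Builder and a hypothetical FooServiceImpl for an entity called Foo:
public long getNextFooId() throws SystemException {
    // Incrementing by the entity class name uses the same per-entity counter
    // that Liferay itself uses when it creates Foo records.
    return CounterLocalServiceUtil.increment(Foo.class.getName());
}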
You can loop over your list of legacy users and create Liferay users by using this class:
package com.yourcompany.yourapp.util;
import java.util.Date;
import java.util.List;
import org.apache.log4j.Logger;
import com.liferay.counter.service.CounterLocalServiceUtil;
import com.liferay.portal.kernel.dao.orm.DynamicQuery;
import com.liferay.portal.kernel.dao.orm.DynamicQueryFactoryUtil;
import com.liferay.portal.kernel.dao.orm.OrderFactoryUtil;
import com.liferay.portal.kernel.dao.orm.PropertyFactoryUtil;
import com.liferay.portal.kernel.exception.PortalException;
import com.liferay.portal.kernel.exception.SystemException;
import com.liferay.portal.kernel.uuid.PortalUUIDUtil;
import com.liferay.portal.kernel.workflow.WorkflowConstants;
import com.liferay.portal.model.Account;
import com.liferay.portal.model.ClassName;
import com.liferay.portal.model.Company;
import com.liferay.portal.model.Contact;
import com.liferay.portal.model.Group;
import com.liferay.portal.model.LayoutSet;
import com.liferay.portal.model.User;
import com.liferay.portal.security.permission.PermissionChecker;
import com.liferay.portal.security.permission.PermissionCheckerFactoryUtil;
import com.liferay.portal.security.permission.PermissionThreadLocal;
import com.liferay.portal.service.AccountLocalServiceUtil;
import com.liferay.portal.service.ClassNameLocalServiceUtil;
import com.liferay.portal.service.CompanyLocalServiceUtil;
import com.liferay.portal.service.ContactLocalServiceUtil;
import com.liferay.portal.service.GroupLocalServiceUtil;
import com.liferay.portal.service.LayoutSetLocalServiceUtil;
import com.liferay.portal.service.RoleLocalServiceUtil;
import com.liferay.portal.service.UserLocalServiceUtil;
import com.liferay.portlet.asset.model.AssetEntry;
import com.liferay.portlet.asset.service.AssetEntryLocalServiceUtil;
public class UserUtil {
private static final Logger logger = Logger.getLogger(UserUtil.class);
private long companyId;
private long creatorUserId;
private long accountId;
private Date date;
public UserUtil() {
try {
DynamicQuery queryCompany = DynamicQueryFactoryUtil.forClass(Company.class)
.addOrder(OrderFactoryUtil.asc("companyId"));
List<Company> listCompany = (List<Company>) CompanyLocalServiceUtil.dynamicQuery(queryCompany, 0, 1);
companyId = listCompany.get(0).getCompanyId();
//-----------
DynamicQuery queryAccount = DynamicQueryFactoryUtil.forClass(Account.class)
.addOrder(OrderFactoryUtil.asc("accountId"));
List<Account> listAccount = (List<Account>) AccountLocalServiceUtil.dynamicQuery(queryAccount, 0, 1);
accountId = listAccount.get(0).getAccountId();
//-----------
DynamicQuery queryUser = DynamicQueryFactoryUtil.forClass(User.class)
.add(PropertyFactoryUtil.forName("defaultUser").eq(false))
.addOrder(OrderFactoryUtil.asc("createDate"));
List<User> listUser = (List<User>) UserLocalServiceUtil.dynamicQuery(queryUser, 0, 1);
creatorUserId = listUser.get(0).getUserId();
date = new Date();
} catch (SystemException ex) {
logger.error(ex.getMessage());
}
}
public void create(String screenName, String emailAddress, String hashedPassword, String fullName) {
try {
long contactId = CounterLocalServiceUtil.increment();//or use Contact.class.getName() as param
Contact contact = ContactLocalServiceUtil.createContact(contactId);
contact.setAccountId(accountId);
//contact.setBirthday(DateUtil.getDate("dd MM yyyy", "01 11 1986"));
contact.setCachedModel(true);
contact.setCompanyId(companyId);
contact.setCreateDate(date);
contact.setEmailAddress(emailAddress);
//contact.setEmployeeNumber(employeeNumber);
//contact.setEmployeeStatusId(employeeStatusId);
contact.setFirstName(fullName);
contact.setMale(true);
contact.setNew(true);
//contact.setUserId(creatorUserId);
User creatorUser = UserLocalServiceUtil.getUserById(creatorUserId);
contact.setUserName(creatorUser.getFullName());
contact.setUserUuid(creatorUser.getUuid());
ContactLocalServiceUtil.addContact(contact);
//----------------------
long userId = CounterLocalServiceUtil.increment();//or use User.class.getName() as param
//----------------------
User user = UserLocalServiceUtil.createUser(userId);
user.setAgreedToTermsOfUse(true);
user.setCachedModel(true);
user.setCompanyId(companyId);
user.setContactId(contactId);
user.setCreateDate(date);
user.setDefaultUser(false);
user.setDigest(null);
user.setEmailAddress(emailAddress);
user.setEmailAddressVerified(true);
user.setFirstName(fullName);
user.setGreeting("Hi " + user.getFirstName());
user.setLanguageId("en_US");
user.setModifiedDate(date);
user.setNew(true);
user.setPassword(hashedPassword);
user.setPasswordEncrypted(true);
user.setPasswordReset(false);
//user.setPasswordUnencrypted();
user.setScreenName(screenName);
user.setStatus(WorkflowConstants.STATUS_APPROVED);
user.setTimeZoneId("UTC+7");
user.setUserUuid(creatorUser.getUuid());
user.setUuid(PortalUUIDUtil.generate());
UserLocalServiceUtil.addUser(user);
//----------------------
try {
// to avoid "PermissionChecker not Initialized"
PermissionChecker checker = PermissionCheckerFactoryUtil.create(creatorUser);
PermissionThreadLocal.setPermissionChecker(checker);
} catch (Exception e) {
logger.error(e.getMessage(), e);
}
//----------------------
ClassName clsNameUser = ClassNameLocalServiceUtil.getClassName(Constants.USER_CLASS);
long classNameId = clsNameUser.getClassNameId();
long groupId = CounterLocalServiceUtil.increment();// or use Group.class.getName() as param
Group group = GroupLocalServiceUtil.createGroup(groupId);
group.setActive(true);
group.setCachedModel(true);
group.setClassNameId(classNameId);
group.setClassPK(userId);
group.setCompanyId(companyId);
group.setCreatorUserId(creatorUser.getUserId());
group.setCreatorUserUuid(creatorUser.getUuid());
group.setFriendlyURL(String.valueOf(userId));
group.setName(String.valueOf(userId));
group.setNew(true);
group.setSite(false);
group.setTreePath("/" + groupId + "/");
group.setType(0);
group.setUuid(PortalUUIDUtil.generate());
GroupLocalServiceUtil.addGroup(group);
//-----------------------------
long layoutSetId1 = CounterLocalServiceUtil.increment();//or use LayoutSet.class.getName() as param
LayoutSet layoutSet1 = LayoutSetLocalServiceUtil.createLayoutSet(layoutSetId1);
layoutSet1.setCachedModel(true);
//layoutSet.setColorSchemeId(colorSchemeId);
layoutSet1.setCompanyId(companyId);
layoutSet1.setCreateDate(date);
//layoutSet.setCss(css);
layoutSet1.setGroupId(groupId);
//layoutSet.setLogo(logo);
//layoutSet.setLogoId(logoId);
layoutSet1.setModifiedDate(date);
layoutSet1.setNew(true);
layoutSet1.setPrivateLayout(true);
//layoutSet.setThemeId(themeId);
LayoutSetLocalServiceUtil.addLayoutSet(layoutSet1);
//-----------------------------
long layoutSetId2 = CounterLocalServiceUtil.increment();// or use LayoutSet.class.getName() as param
LayoutSet layoutSet2 = LayoutSetLocalServiceUtil.getLayoutSet(layoutSetId1);
layoutSet2.setLayoutSetId(layoutSetId2);
layoutSet2.setPrivateLayout(false);
LayoutSetLocalServiceUtil.addLayoutSet(layoutSet2);
//-----------------------------
long assetEntryId = CounterLocalServiceUtil.increment();//or use AssetEntry.class.getName() as param
AssetEntry assetEntry = AssetEntryLocalServiceUtil.createAssetEntry(assetEntryId);
assetEntry.setCompanyId(companyId);
assetEntry.setClassPK(userId);
assetEntry.setGroupId(groupId);
assetEntry.setClassNameId(classNameId);
//ae.setTitle(title);
assetEntry.setUserId(userId);
AssetEntryLocalServiceUtil.addAssetEntry(assetEntry);
//--------------------------------------------------
//long orgAdminRoleId = RoleLocalServiceUtil.getRole(companyId, Constants.ORG_ADMIN_ROLE_NAME).getRoleId();
//UserGroupRoleLocalServiceUtil.addUserGroupRoles(userId, groupId, new long[] { orgAdminRoleId });
long orgUserRoleId = RoleLocalServiceUtil.getRole(companyId, Constants.ORG_USER_ROLE_NAME).getRoleId();
RoleLocalServiceUtil.addUserRole(userId, orgUserRoleId);
long siteMemberRoleId = RoleLocalServiceUtil.getRole(companyId, Constants.SITE_MEMBER_ROLE_NAME).getRoleId();
RoleLocalServiceUtil.addUserRole(userId, siteMemberRoleId);
//-----------------------------------------------------------
} catch (SystemException | PortalException ex) {
logger.error(ex.getMessage(), ex);
}
}
}
Then you can create a new instance of UserUtil and call its create() method inside your loop over the legacy user list, something like this:
UserUtil userUtil = new UserUtil();
for (LegacyUser user : listOfLegacyUser) {
userUtil.create(........);
}
Note that hashedPassword depends on your hashing method, which is defined in portal-ext.properties. The default value is:
passwords.encryption.algorithm=PBKDF2WithHmacSHA1/160/128000
but you can use one of the values below:
passwords.encryption.algorithm=BCRYPT/10
passwords.encryption.algorithm=MD2
passwords.encryption.algorithm=MD5
passwords.encryption.algorithm=NONE
passwords.encryption.algorithm=PBKDF2WithHmacSHA1/160/128000
passwords.encryption.algorithm=SHA
passwords.encryption.algorithm=SHA-256
passwords.encryption.algorithm=SHA-384
passwords.encryption.algorithm=SSHA
passwords.encryption.algorithm=UFC-CRYPT

Lucene query with numeric field does not find anything

I am trying to understand how the Lucene query syntax works, so I wrote this small program.
When using a NumericRangeQuery I can find the documents I want, but when trying to parse a search condition, it can't find any hits, although I'm using the same conditions.
I understand the difference could be explained by the analyzer, but the StandardAnalyzer is used, which does not remove numeric values.
Can someone tell me what I'm doing wrong?
Thanks.
package org.burre.lucene.matching;
import java.io.IOException;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.*;
import org.apache.lucene.index.*;
import org.apache.lucene.queryparser.classic.ParseException;
import org.apache.lucene.queryparser.classic.QueryParser;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.NumericRangeQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.store.*;
import org.apache.lucene.util.Version;
public class SmallestEngine {
private static final Version VERSION=Version.LUCENE_48;
private StandardAnalyzer analyzer = new StandardAnalyzer(VERSION);
private Directory index = new RAMDirectory();
private Document buildDoc(String name, int beds) {
Document doc = new Document();
doc.add(new StringField("name", name, Field.Store.YES));
doc.add(new IntField("beds", beds, Field.Store.YES));
return doc;
}
public void buildSearchEngine() throws IOException {
IndexWriterConfig config = new IndexWriterConfig(VERSION,
analyzer);
IndexWriter w = new IndexWriter(index, config);
// Generate 10 houses with 0 to 3 beds
for (int i=0;i<10;i++)
w.addDocument(buildDoc("house"+(100+i),i % 4));
w.close();
}
/**
* Execute the query and show the result
*/
public void search(Query q) throws IOException {
System.out.println("executing query\""+q+"\"");
IndexReader reader = DirectoryReader.open(index);
try {
IndexSearcher searcher = new IndexSearcher(reader);
ScoreDoc[] hits = searcher.search(q, 10).scoreDocs;
System.out.println("Found " + hits.length + " hits.");
for (int i = 0; i < hits.length; ++i) {
int docId = hits[i].doc;
Document d = searcher.doc(docId);
System.out.println(""+(i+1)+". " + d.get("name") + ", beds:"
+ d.get("beds"));
}
} finally {
if (reader != null)
reader.close();
}
}
public static void main(String[] args) throws IOException, ParseException {
SmallestEngine me = new SmallestEngine();
me.buildSearchEngine();
System.out.println("SearchByRange");
me.search(NumericRangeQuery.newIntRange("beds", 3, 3,true,true));
System.out.println("-----------------");
System.out.println("SearchName");
me.search(new QueryParser(VERSION,"name",me.analyzer).parse("house107"));
System.out.println("-----------------");
System.out.println("Search3Beds");
me.search(new QueryParser(VERSION,"beds",me.analyzer).parse("3"));
System.out.println("-----------------");
System.out.println("Search3BedsInRange");
me.search(new QueryParser(VERSION,"name",me.analyzer).parse("beds:[3 TO 3]"));
}
}
The output of this program is:
SearchByRange
executing query"beds:[3 TO 3]"
Found 2 hits.
1. house103, beds:3
2. house107, beds:3
-----------------
SearchName
executing query"name:house107"
Found 1 hits.
1. house107, beds:3
-----------------
Search3Beds
executing query"beds:3"
Found 0 hits.
-----------------
Search3BedsInRange
executing query"beds:[3 TO 3]"
Found 0 hits.
You need to use NumericRangeQuery to perform a search on the numeric field.
The answer here could give you some insight.
Also the answer here says
for numeric values (longs, dates, floats, etc.) you need to have NumericRangeQuery. Otherwise Lucene has no idea how do you want to define similarity.
What you need to do is to write your own QueryParser:
public class CustomQueryParser extends QueryParser {
// ctor omitted
@Override
public Query newTermQuery(Term term) {
if (term.field().equals("beds")) {
// manually construct and return non-range query for numeric value
} else {
return super.newTermQuery(term);
}
}
@Override
public Query newRangeQuery(String field, String part1, String part2, boolean startInclusive, boolean endInclusive) {
if (field.equals("beds")) {
// manually construct and return range query for numeric value
} else {
return super.newRangeQuery(field, part1, part2, startInclusive, endInclusive);
}
}
}
It seems like you always have to use a NumericRangeQuery for numeric conditions (thanks to Mindas), so as he suggested I created my own, more intelligent QueryParser.
Using the Apache commons-lang function StringUtils.isNumeric() I can create a more generic QueryParser:
public class IntelligentQueryParser extends QueryParser {
// take over super constructors
@Override
protected org.apache.lucene.search.Query newRangeQuery(String field,
String part1, String part2, boolean part1Inclusive, boolean part2Inclusive) {
if(StringUtils.isNumeric(part1))
{
return NumericRangeQuery.newIntRange(field, Integer.parseInt(part1),Integer.parseInt(part2),part1Inclusive,part2Inclusive);
}
return super.newRangeQuery(field, part1, part2, part1Inclusive, part2Inclusive);
}
@Override
protected org.apache.lucene.search.Query newTermQuery(
org.apache.lucene.index.Term term) {
if(StringUtils.isNumeric(term.text()))
{
return NumericRangeQuery.newIntRange(term.field(), Integer.parseInt(term.text()),Integer.parseInt(term.text()),true,true);
}
return super.newTermQuery(term);
}
}
Just wanted to share this.

xe:objectData - Object has been removed or recycled

I use an xe:objectData as a data source for an xp:dataTable. objectData1 uses some Java code to retrieve all documents from a view that match a key (the username). The Java code looks like this:
package com.isatweb.cois;
import static com.ibm.xsp.extlib.util.ExtLibUtil.getCurrentDatabase;
import static com.ibm.xsp.extlib.util.ExtLibUtil.getCurrentSession;
import java.io.Serializable;
import lotus.domino.Database;
import lotus.domino.Name;
import lotus.domino.Session;
import lotus.domino.View;
import lotus.domino.ViewEntryCollection;
public class ObjectDataVisits implements Serializable {
private static final long serialVersionUID = 1L;
ViewEntryCollection vec = null;
public ObjectDataVisits(){
try {
this.update();
} catch (Exception e) {
System.out.print(e);
}
}
public void update() {
try {
Database _db = getCurrentDatabase();
Session _session = getCurrentSession();
Name nam = _session.createName(_session.getEffectiveUserName());
String username = nam.getAbbreviated().replace(" ", "#").replace("/", "#").toUpperCase();
View view = _db.getView("vw_visit_open");
this.vec = view.getAllEntriesByKey(username);
} catch (Exception e) {
System.out.print(e);
}
}
public ViewEntryCollection getVisits(){
return this.vec;
}
}
The XPage has the following code
When I first load the page, the data is read from the view and the dataTable displays the NoteIDs of all matching documents.
When I refresh the page using the button, I get an "Object has been removed or recycled" error.
Can anyone please show me what I'm doing wrong (and perhaps how to do it right)?
The problem is that Notes objects are not serializable. During the partial refresh, the getVisits() method is executed before the update() method. The ViewEntryCollection is a reference to a view, and this view has already been recycled.
If you just want to store some note IDs, then you could store them in a Vector instead. Otherwise you have to call your update() method from your getVisits() method every time.
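A minimal sketch of the Vector approach, assuming the ObjectDataVisits class above (only the changed parts are shown; you also need to import java.util.Vector and lotus.domino.ViewEntry):
private Vector<String> noteIds = new Vector<String>();
public void update() {
    try {
        noteIds = new Vector<String>();
        // ... look up the view and build the username key exactly as before ...
        ViewEntryCollection vec = view.getAllEntriesByKey(username);
        ViewEntry entry = vec.getFirstEntry();
        while (entry != null) {
            noteIds.add(entry.getNoteID());
            ViewEntry next = vec.getNextEntry(entry);
            entry.recycle();
            entry = next;
        }
    } catch (Exception e) {
        System.out.print(e);
    }
}
public Vector<String> getVisits() {
    // Plain strings are serializable, so they survive partial refreshes
    return this.noteIds;
}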

Spring LDAP Template Usage

Please take a look at the test class below. I am trying to do an LDAP search with the Spring LDAP template. I am able to search and produce a list of entries matching the search criteria without the Spring LDAP template, by using the DirContext as shown in the method searchWithoutTemplate(). But when I use an LdapTemplate, I end up with an NPE as shown further below. I am sure I must be missing something. Can someone help, please?
import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.NamingEnumeration;
import javax.naming.NamingException;
import javax.naming.directory.Attribute;
import javax.naming.directory.Attributes;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;
import javax.naming.directory.SearchControls;
import javax.naming.directory.SearchResult;
import javax.naming.ldap.LdapName;
import org.springframework.ldap.core.AttributesMapper;
import org.springframework.ldap.core.LdapTemplate;
import org.springframework.ldap.core.support.DefaultDirObjectFactory;
import org.springframework.ldap.core.support.LdapContextSource;
public class LDAPSearchTest {
//bind params
static String url="ldap://<IP>:<PORT>";
static String userName="cn=Directory Manager";
static String password="password123";
static String bindDN="dc=XXX,dc=com";
//search params
static String base = "ou=StandardUser,ou=XXXCustomers,ou=People,dc=XXX,dc=com";
static String filter = "(objectClass=*)";
static String[] attributeFilter = { "cn", "uid" };
static SearchControls sc = new SearchControls();
public static void main(String[] args) throws Exception {
// sc.setSearchScope(SearchControls.SUBTREE_SCOPE);
sc.setReturningAttributes(attributeFilter);
searchWithTemplate(); //NPE
//searchWithoutTemplate(); //works fine
}
public static void searchWithTemplate() throws Exception {
DefaultDirObjectFactory factory = new DefaultDirObjectFactory();
LdapContextSource cs = new LdapContextSource();
cs.setUrl(url);
cs.setUserDn(userName);
cs.setPassword(password);
cs.setBase(bindDN);
cs.setDirObjectFactory(factory.getClass ());
LdapTemplate template = new LdapTemplate(cs);
template.afterPropertiesSet();
System.out.println((template.search(new LdapName(base), filter, sc,
new AttributesMapper() {
public Object mapFromAttributes(Attributes attrs)
throws NamingException {
System.out.println(attrs);
return attrs.get("uid").get();
}
})));
}
public static void searchWithoutTemplate() throws NamingException{
Hashtable env = new Hashtable(11);
env.put(Context.INITIAL_CONTEXT_FACTORY,"com.sun.jndi.ldap.LdapCtxFactory");
env.put(Context.PROVIDER_URL, url);
//env.put(Context.SECURITY_AUTHENTICATION, "simple");
env.put(Context.SECURITY_PRINCIPAL, userName);
env.put(Context.SECURITY_CREDENTIALS, password);
DirContext dctx = new InitialDirContext(env);
NamingEnumeration results = dctx.search(base, filter, sc);
while (results.hasMore()) {
SearchResult sr = (SearchResult) results.next();
Attributes attrs = sr.getAttributes();
System.out.println(attrs);
Attribute attr = attrs.get("uid");
}
dctx.close();
}
}
Exception is:
Exception in thread "main" java.lang.NullPointerException
at org.springframework.ldap.core.support.AbstractContextSource.getReadOnlyContext(AbstractContextSource.java:125)
at org.springframework.ldap.core.LdapTemplate.search(LdapTemplate.java:287)
at org.springframework.ldap.core.LdapTemplate.search(LdapTemplate.java:237)
at org.springframework.ldap.core.LdapTemplate.search(LdapTemplate.java:588)
at org.springframework.ldap.core.LdapTemplate.search(LdapTemplate.java:546)
at LDAPSearchTest.searchWithTemplate(LDAPSearchTest.java:47)
at LDAPSearchTest.main(LDAPSearchTest.java:33)
I am using Spring 2.5.6 and Spring LDAP 1.3.0
A quick scan showed that it's the authenticationSource field of AbstractContextSource that is the culprit. That file includes the following comment on the afterPropertiesSet() method:
/**
* Checks that all necessary data is set and that there is no compatibility
* issues, after which the instance is initialized. Note that you need to
* call this method explicitly after setting all desired properties if using
* the class outside of a Spring Context.
*/
public void afterPropertiesSet() throws Exception {
...
}
That method then goes on to create an appropriate authenticationSource if you haven't provided one.
As your test code above is most definitely not running within a Spring context, and you haven't explicitly set an authenticationSource, I think you need to edit your code as follows:
...
cs.setDirObjectFactory(factory.getClass ());
// Allow Spring to configure the Context Source:
cs.afterPropertiesSet();
LdapTemplate template = new LdapTemplate(cs);
