Best method for retrieving data in XPages and Java - xpages

I am sorry this isn't a 'coding' question, but after some considerable time on the learning path with XPages and Java I still struggle with finding definitive information on the correct way to carry out the basics.
When developing an XPages application using Java which is the best or most efficient method for accessing data?
1) Setting up and maintaining a view with a column for every field in the document, and retrieving data via ViewEntry.getColumnValues().get(int); i.e. not accessing the document but retrieving the data from the view. This is what I have been doing, but my view columns keep increasing, along with the hassle of maintaining column sequences. My understanding is that this is the faster method.
or
2) Just drop everything into a document, using a view only when necessary, but in the main using Database.getDocumentByUNID().getItemValueAsString("field") and not worrying about adding lots of columns. Far easier to maintain, but is accessing the document slowing things down?

Neither 1) nor 2).
Go this way to manage a document's data in a Java class:
Read the document's items all at once in your Java class into class fields
Remember the UNID in a class field too
Recycle the document after reading the items
Get the document again via its UNID for every read/write, and recycle it afterwards
Database.getDocumentByUNID() is quite fast, but call it only once per document and not once per item.
Update
As you mentioned in a comment, you have a database with 50,000 documents of various types.
I'd try to read only those documents you really need. If you want to read e.g. all support ticket documents of a customer you would use a view containing support tickets sorted by customer (without additional columns). You would get the documents with getAllDocumentsByKey(customer, true) and put the Java objects (each based on a document) into a Map.
You can maintain a cache in addition. Create a Map of models (model = Java object for a document type) and use the UNID as the key. This way you avoid reading/storing documents twice.
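A minimal sketch of such a UNID-keyed cache in plain Java (the ModelCache name and the Function-based loader are my own illustration, not part of any Domino API; the loader stands in for "open the document by UNID, copy its items into fields, recycle it"):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Illustrative cache: one model object per document, keyed by UNID,
// so the same document is never opened and read twice.
public class ModelCache<T> {
    private final Map<String, T> cache = new HashMap<String, T>();

    // loader turns a UNID into a model object (e.g. by opening the
    // document once, reading its items, then recycling it)
    public T get(String unid, Function<String, T> loader) {
        return cache.computeIfAbsent(unid, loader);
    }

    public int size() {
        return cache.size();
    }
}
```

With getAllDocumentsByKey(customer, true) you would fill such a map once and hand the models to your XPages controls instead of live Document handles.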

This is a really great question. I would first say that I agree 100% with Knut and that's how I code my objects that represent documents.
Below I've pasted a code example of what I typically do. Note this code uses the OpenNTF Domino API, which among other things takes care of recycling for me, so you won't see any recycle calls.
But as Knut says - grab the document, get what you need from it, then it can be discarded again. When you want to save something, just get the document again. At this stage you could even check lastModified or something to see if another save took place since the time you loaded it.
Sometimes for convenience I overload the load() method and add a NotesViewEntry:
public boolean load(ViewEntry entry) {}
Then in there I could just get the document, or, if it's a specific situation, use the view columns.
Now this works great when dealing with a single document at a time. It works really well if I want to loop through many documents for a collection, but if you get too many you might see some of the overhead start to slow things down. In one app I have, if I "ingest" 30,000 docs like this into a collection it can get a little slow.
I don't have a great answer for this yet. I've tried the big view with many columns, like it sounds like you did. I've also tried creating a lower-level, basic version of the object with just the needed fields, designed to work on ViewEntrys and their columns. Making sure you lazy load what you can is pretty important, I think.
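One way to sketch that lazy loading in plain Java (the Supplier here is a stand-in for "go back to the document and read the expensive item"; EpisodeSummary is an illustrative name, not part of the code below):

```java
import java.util.function.Supplier;

// Illustrative lazy-loaded model: cheap fields come from view columns at
// construction time; the expensive body is only read from the document
// the first time somebody actually asks for it.
public class EpisodeSummary {
    private final String title;               // cheap: from a view column
    private String body;                       // expensive: document item
    private final Supplier<String> bodyLoader;

    public EpisodeSummary(String title, Supplier<String> bodyLoader) {
        this.title = title;
        this.bodyLoader = bodyLoader;
    }

    public String getTitle() {
        return title;
    }

    public String getBody() {
        if (body == null) {                    // load on first access only
            body = bodyLoader.get();
        }
        return body;
    }
}
```

When you build 30,000 of these from view entries, the documents behind them are never touched unless a body is actually displayed.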
Anyway here's a code example that shows how I build most of my document driven objects.
package com.notesIn9.video;
import java.io.Serializable;
import java.util.Date;
import org.openntf.domino.Database;
import org.openntf.domino.Document;
import org.openntf.domino.Session;
import org.openntf.domino.View;
import org.openntf.domino.utils.Factory;
public class Episode implements Serializable {
/**
*
*/
private static final long serialVersionUID = 1L;
private String title;
private Double number;
private String authorId;
private String contributorId;
private String summary;
private String subTitle;
private String youTube;
private String libsyn;
private Date publishedDate;
private Double minutes;
private Double seconds;
private String blogLink;
private boolean valid;
private String unid;
private String unique;
private String creator;
public Episode() {
this.unid = "";
}
public void create() {
Session session = Factory.getSession(); // this will be slightly
// different if not using the
// OpenNTF Domino API
this.setUnique(session.getUnique());
this.setCreator(session.getEffectiveUserName());
this.valid = true;
}
public Episode load(Document doc) {
this.loadValues(doc);
return this;
}
public boolean load(String key) {
// this key is the unique key of the document. UNID would be
// faster/easier.. I just kinda hate using them and seeing them in URLS
Session session = Factory.getSession();
Database currentDb = session.getCurrentDatabase();
Database db = session.getDatabase(currentDb.getServer(), "episodes.nsf");
View view = db.getView("lkup_episodes");
Document doc = view.getDocumentByKey(key); // This is deprecated because
// the API prefers to use
// getFirstDocumentByKey
if (null == doc) {
// document not found. DANGER
this.valid = false;
} else {
this.loadValues(doc);
}
return this.valid;
}
private void loadValues(Document doc) {
this.title = doc.getItemValueString("title");
this.number = doc.getItemValueDouble("number");
this.authorId = doc.getItemValueString("authorId");
this.contributorId = doc.getItemValueString("contributorId");
this.summary = doc.getItemValueString("summary");
this.subTitle = doc.getItemValueString("subtitle");
this.youTube = doc.getItemValueString("youTube");
this.libsyn = doc.getItemValueString("libsyn");
this.publishedDate = doc.getItemValue("publishedDate", Date.class);
this.minutes = doc.getItemValueDouble("minutes");
this.seconds = doc.getItemValueDouble("seconds");
this.blogLink = doc.getItemValueString("blogLink");
this.unique = doc.getItemValueString("unique");
this.creator = doc.getItemValueString("creator");
this.unid = doc.getUniversalID();
this.valid = true;
}
public boolean save() {
Session session = Factory.getSession();
Database currentDb = session.getCurrentDatabase();
Database db = session.getDatabase(currentDb.getServer(), "episodes.nsf");
Document doc = null;
if (this.unid.isEmpty()) {
doc = db.createDocument();
doc.replaceItemValue("form", "episode");
this.unid = doc.getUniversalID();
} else {
doc = db.getDocumentByUNID(this.unid);
}
this.saveValues(doc);
return doc.save();
}
private void saveValues(Document doc) {
doc.replaceItemValue("title", this.title);
doc.replaceItemValue("number", this.number);
doc.replaceItemValue("authorId", this.authorId);
doc.replaceItemValue("contributorId", this.contributorId);
doc.replaceItemValue("summary", this.summary);
doc.replaceItemValue("subtitle", this.subTitle);
doc.replaceItemValue("youTube", this.youTube);
doc.replaceItemValue("libsyn", this.libsyn);
doc.replaceItemValue("publishedDate", this.publishedDate);
doc.replaceItemValue("minutes", this.minutes);
doc.replaceItemValue("seconds", this.seconds);
doc.replaceItemValue("blogLink", this.blogLink);
doc.replaceItemValue("unique", this.unique);
doc.replaceItemValue("creator", this.creator);
}
// getters and setters removed to condense code.
public boolean remove() {
Session session = Factory.getSession();
Database currentDb = session.getCurrentDatabase();
Database db = session.getDatabase(currentDb.getServer(), "episodes.nsf");
if (this.unid.isEmpty()) {
// this is a new Doc
return false;
} else {
Document doc = db.getDocumentByUNID(this.getUnid());
return doc.remove(true);
}
}
}

It's all about balance. Everything has its price. Big views (case 1) slow down indexing. Opening documents every time (case 2) slows down your code.
Find something in between.

Related

Unable to retrieve records from azure storage table when manually inserting records

I am having a problem retrieving records from an Azure storage table when the record is inserted from the portal itself. The table structure is fairly simple:
package com.nielsen.batchJobsManager.storage.entities;
import com.microsoft.azure.storage.table.TableServiceEntity;
public class BatchJobConfigEntity extends TableServiceEntity {
public BatchJobConfigEntity(String jobPrefix, String configName) {
this.partitionKey = jobPrefix;
this.rowKey = configName;
}
public BatchJobConfigEntity() {
}
public String configValue;
public void setConfigValue(String configValue) {
this.configValue = configValue;
}
public String getConfigValue() {
return this.configValue;
}
}
I am just trying to fetch the configValue stored in the table but I am having no luck, as you can see from the screenshot. However, I have noticed that if I add the record using a Java application with "TableOperation.insertOrMerge" then it works, but I just do not understand why it should matter!
OK, found the solution just by trying random stuff! I hope this will come in handy for folks who are facing the same issue. It turns out the property name must follow camel case but with the first character capitalized. So:
had to be changed to :
Only after inserting it like that was I able to get the configValue from the table entity object correctly.
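This fits with the way table SDKs typically derive the stored property name from the entity's getters: strip the get prefix and keep the remainder's casing, so getConfigValue implies a column named ConfigValue. A rough sketch of that derivation (my own illustration, not the SDK's actual code):

```java
// Rough illustration of getter-to-property-name mapping: strip the "get"
// prefix and keep the remainder's casing, so "getConfigValue" maps to the
// table property "ConfigValue" (Pascal case), not "configValue".
public class PropertyNameMapper {
    public static String propertyNameFor(String getterName) {
        if (getterName.startsWith("get") && getterName.length() > 3) {
            return getterName.substring(3);
        }
        throw new IllegalArgumentException("Not a getter: " + getterName);
    }
}
```

A row inserted by hand in the portal with a lowercase column name simply never matches the reflected property, which is why the field came back empty.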

Need a new method in CatalogEndpoint: findAllProductsForCategory

I have REST API working, but I need a way to return all products within a certain category. Right now I'm using findSearchResultsByCategoryAndQuery by passing the category id and using a very generic query to get almost all the products in the category. What I really need is a method called findAllProductsForCategory which returns all products in the category.
I'm new to Broadleaf and REST API, can someone please guide me through how to extend CatalogEndpoint to get the functionality I need.
Although Broadleaf provides SQL injection protection (ExploitProtectionServiceImpl), I recommend you use ProductDao.
Extend org.broadleafcommerce.core.web.api.endpoint.catalog.CatalogEndpoint, or add to your implementation a new method that utilizes ProductDao:
@Resource(name = "blProductDao")
private ProductDao productDao;
@RequestMapping(value = "search/products-by-category/{categoryId}") // GET is the default
public List<Product> findSearchResultsByCategory(HttpServletRequest request, @PathVariable("categoryId") Long categoryId) {
return productDao.readProductsByCategory(categoryId);
}
It's querying database with:
SELECT categoryProduct.product_id
FROM BLC_CATEGORY_PRODUCT_XREF categoryProduct
WHERE categoryProduct.category_id = :categoryId
ORDER BY COALESCE (categoryProduct.display_order,999999)
Or if you want to create your own dao
public class MyProductDaoImpl extends ProductDaoImpl implements MyProductDao {
public static final String QUERY = "SELECT categoryProduct.product_id " +
"FROM BLC_CATEGORY_PRODUCT_XREF categoryProduct " +
"WHERE categoryProduct.category_id = :categoryId";
@Override
public List<Product> meFindingProductsByCategory(String categoryId) {
Query query = em.createQuery(QUERY);
query.setParameter("categoryId", categoryId);
return query.getResultList();
}
}
You can choose whether you are producing JSON or XML. Be sure that you have a corresponding Product model for binding the results.

Is this Object Casting pattern acceptable in SharePoint?

I'm creating a SharePoint application, and am trying some new things to create what amounts to an API for Data Access to maintain consistency and conventions.
I haven't seen this before, and that makes me think it might be bad :)
I've overloaded the constructor for class Post to only take an SPListItem as a parameter. I then have an embedded Generic List of Post that takes an SPListItemCollection in the method signature.
I loop through the items in a more efficient for statement, and this means if I ever need to add or modify how the Post object is cast, I can do it in the Class definition for a single source.
class Post
{
public int ID { get; set; }
public string Title { get; set; }
public Post(SPListItem item)
{
ID = item.ID;
Title = (string)item["Title"];
}
public static List<Post> Posts(SPListItemCollection _items)
{
var returnlist = new List<Post>();
for (int i = 0; i < _items.Count; i++) {returnlist.Add(new Post(_items[i]));}
return returnlist;
}
}
This enables me to do the following:
static public List<Post> GetPostsByCommunity(string communityName)
{
var targetList = CoreLists.SystemAccount.Posts(); //CAML emitted for brevity
return Post.Posts(targetList.GetItems(query)); //Call the constructor
}
Is this a bad idea?
This approach might be suitable, but that FOR loop causes some concern. _items.Count will force the SPListItemCollection to retrieve ALL those items in the list from the database. With large lists, this could either a) cause a throttling exception, or b) use up a lot of resources. Why not use a FOREACH loop? With that, I think the SPListItems are retrieved and disposed of one at a time.
If I were writing this I would have a 'Posts' class as well as 'Post', and give it the constructor accepting the SPListItemCollection.
To be honest, though, the few times I've seen people try and wrap SharePoint SPListItems, it's always ended up seeming more effort than it's worth.
Also, if you're using SharePoint 2010, have you considered using SPMetal?

What approach is good for US State ListBox/DropDownList

What is the best approach to bind US states in WPF (in a ListBox or DropDownList)? Should I use a DataTable to bind this data? Is binding a DataTable to a WPF object the right programming approach? Or should I use a class/object, i.e. get the data from the database, convert it to a generic object list and then bind this list to the WPF object?
Thanks
public class States
{
public string Id { get; set; }
public string Name { get; set; }
}
// get from database, then build the list, e.g.:
List<States> states = new List<States>();
states.Add(new States { Id = "1", Name = "Alabama" });
// ... and so on for each row from the data source
// next, cache the list of states for better performance
Many ways...
One approach is to use a list class: get the states from the data source, then cache the list for better performance.

'Unexpected element: XX' during deserialization MongoDB C#

I'm trying to persist an object into a MongoDB, using the following bit of code:
public class myClass
{
public string Heading { get; set; }
public string Body { get; set; }
}
static void Main(string[] args)
{
var mongo = MongoServer.Create();
var db = mongo.GetDatabase("myDb");
var col = db.GetCollection<BsonDocument>("myCollection");
var myinstance = new myClass();
col.Insert(myinstance);
var query = Query.And(Query.EQ("_id", new ObjectId("4df06c23f0e7e51f087611f7")));
var res = col.Find(query);
foreach (var doc in res)
{
var obj = BsonSerializer.Deserialize<myClass>(doc);
}
}
However, I get the following exception, 'Unexpected element: _id', when trying to deserialize the document.
So do I need to deserialize in another way? What is the preferred way of doing this?
TIA
Søren
You are searching for a given document using an ObjectId, but when you save an instance of MyClass you aren't providing an Id property, so the driver will create one for you (you can make any property the id by adding the [BsonId] attribute to it). When you retrieve that document, your class doesn't have an Id, so you get the deserialization error.
You can add the BsonIgnoreExtraElements attribute to the class as Chris said, but you should really add an Id property of type ObjectId to your class; you obviously need the Id (as you are using it in your query). As the _id property is reserved for the primary key, you are only ever going to retrieve a single document, so you would be better off writing your query like this:
col.FindOneById(new ObjectId("4df06c23f0e7e51f087611f7"));
The fact that you are deserializing to an instance of MyClass once you retrieve the document lends itself to strongly typing the collection, so where you create an instance of the collection you can do this
var col = db.GetCollection<MyClass>("myCollection");
so that when you retrieve the document using the FindOneById method the driver will take care of the deserialization for you. Putting it all together (provided you add the Id property to the class) you could write:
var col = db.GetCollection<MyClass>("myCollection");
MyClass myClass = col.FindOneById(new ObjectId("4df06c23f0e7e51f087611f7"));
One final thing to note: as the _id property is created for you on save by the driver, if you were to leave it off your MyClass instance, every time you saved that document you would get a new Id and hence a new document. So if you saved it n times you would have n documents, which probably isn't what you want.
A slight variation on Projapati's answer. First, Mongo will happily deserialize the id value to a property named Id, which is more C#-ish. But you don't necessarily need to do this if you are just retrieving data.
You can add [BsonIgnoreExtraElements] to your class and it should work. This will allow you to return a subset of the data, great for queries and view-models.
Try adding _id to your class.
This usually happens when your class doesn't have members for all fields in your document.
public class myClass
{
public ObjectId _id { get; set; }
public string Heading { get; set; }
public string Body { get; set; }
}
