I would like to apply some logic in a drillthrough postprocessor which depends on the location where the drillthrough has been requested.
For instance, let's say I have a risk vector and a list of maturities. If the user executes the drillthrough on the 3M maturity, I would like to display the risk value for that maturity in the drillthrough.
I was thinking about setting a context value with the location, and then retrieve it in the DT post processor, but I would be happy if there is an easier way :)
Regards,
Christophe
In ActivePivot, a post-processed property (such as the ones you can write to customize drillthrough) does indeed have access to all the standard attributes of the current drillthrough row.
In the ActivePivot Sandbox application, since version 5.0 there has been an example of a drillthrough post-processed property that extracts the book id that way:
/**
 * @author Quartet FS
 */
@QuartetExtendedPluginValue(intf = IPostProcessedProperty.class, key = BookIdColumnPostProcessor.PLUGIN_KEY)
public class BookIdColumnPostProcessor extends APostProcessedProperty {

    private static final long serialVersionUID = 1L;

    public static final String PLUGIN_KEY = "BookIdColumn";

    public BookIdColumnPostProcessor(Properties properties) {
        super(properties);
    }

    @Override
    public Object getValue(Object target) {
        // Retrieve the value in the BookId column.
        BookId book = (BookId) attributeAccessors.get("BookId").getValue(target);
        return book.getId();
    }

    @Override
    public String getType() {
        return PLUGIN_KEY;
    }
}
Thanks for your answers. My question did indeed involve analysis dimensions and vectors.
I have solved it by setting a context value in a custom UpdateDrillthroughFeedHandler as defined on confluence (http://support.quartetfs.com/confluence/display/LIVE/Extensions#Extensions-Customizationentrypoints), and retrieving it in a drillthrough post processor.
Regards,
Christophe
I am trying to find the distances along with the locations using Spring Data Mongo GeoSpatial support, following this: https://docs.spring.io/spring-data/mongodb/docs/current/reference/html/#mongo.geo-near
GeoResults<VenueWithDisField> results = template.query(Venue.class)
    .as(VenueWithDisField.class)
    .near(NearQuery.near(new GeoJsonPoint(-73.99, 40.73), KILOMETERS))
    .all();
I tried
@Data
@NoArgsConstructor
@AllArgsConstructor
public class RestaurantWithDisField {
    private Restaurant restaurant;
    private Number dis;
}
@Data
@AllArgsConstructor
@NoArgsConstructor
@Document(collection = "restaurants")
public class Restaurant {
    @Id
    private String id;
    private String name;
    @GeoSpatialIndexed(name = "location", type = GeoSpatialIndexType.GEO_2DSPHERE)
    private GeoJsonPoint location;
}
public GeoResults<RestaurantWithDisField> findRestaurantsNear(GeoJsonPoint point, Distance distance) {
final NearQuery nearQuery = NearQuery.near(point)
.maxDistance(distance)
.spherical(true);
return mongoTemplate.query(Restaurant.class)
.as(RestaurantWithDisField.class)
.near(nearQuery)
.all();
}
But in the result I am getting the below. If I don't set the target type and just collect the domain type I get all the other values but the distance.
Restaurant - RestaurantWithDisField(restaurant=null, dis=0.12914248082237584)
Restaurant - RestaurantWithDisField(restaurant=null, dis=0.19842138954997746)
Restaurant - RestaurantWithDisField(restaurant=null, dis=0.20019522190348576)
Can someone please help me understand why I am unable to fetch the domain type value, or how I should?
Thank you
The mapping fails to resolve restaurant in RestaurantWithDisField because the values within the result Document do not match the target entity's properties.
You might want to use inheritance instead of composition here and let RestaurantWithDisField extend Restaurant, provide your own converter, or just use Restaurant and rely on GeoResults holding a list of GeoResult elements that already include the Distance along with the actual mapped domain type - pretty much the same thing you've been modelling with RestaurantWithDisField. A sketch of that last option follows.
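To illustrate, a minimal sketch (reusing the point, distance and mongoTemplate names from the question) of reading the distance straight off each GeoResult:

public GeoResults<Restaurant> findRestaurantsNear(GeoJsonPoint point, Distance distance) {
    // Query for plain Restaurant entities; each GeoResult wraps the mapped
    // entity together with its calculated distance, so no extra DTO is needed.
    GeoResults<Restaurant> results = mongoTemplate.query(Restaurant.class)
        .near(NearQuery.near(point).maxDistance(distance).spherical(true))
        .all();
    for (GeoResult<Restaurant> result : results) {
        Restaurant restaurant = result.getContent(); // the mapped domain object
        Distance dis = result.getDistance();         // the distance to the query point
        System.out.println(restaurant.getName() + " -> " + dis.getValue());
    }
    return results;
}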
A Spring Data Mongo repository can generate the right query for you if you name the method correctly and put the data into your domain POJO. I was able to find examples here (blocking) or here (reactive).
interface RestaurantRepository extends MongoRepository<Restaurant, String> {
    // derived geo-near query; each result carries the entity plus its distance
    GeoResults<Restaurant> findByLocationNear(Point location, Distance distance);
}
From what I see, the (Reactive)MongoTemplate uses a GeoNearResultDocumentCallback to wrap the restaurant in a GeoResult. You may want to take a look there.
I have REST API working, but I need a way to return all products within a certain category. Right now I'm using findSearchResultsByCategoryAndQuery by passing the category id and using a very generic query to get almost all the products in the category. What I really need is a method called findAllProductsForCategory which returns all products in the category.
I'm new to Broadleaf and REST API, can someone please guide me through how to extend CatalogEndpoint to get the functionality I need.
Although Broadleaf provides SQL injection protection (ExploitProtectionServiceImpl), I recommend using ProductDao.
Extend org.broadleafcommerce.core.web.api.endpoint.catalog.CatalogEndpoint, or add to your implementation a new method that utilizes ProductDao:
@Resource(name = "blProductDao")
private ProductDao productDao;

@RequestMapping(value = "search/products-by-category/{categoryId}") // GET is the default
public List<Product> findSearchResultsByCategory(HttpServletRequest request, @PathVariable("categoryId") Long categoryId) {
    return productDao.readProductsByCategory(categoryId);
}
It queries the database with:
SELECT categoryProduct.product_id
FROM BLC_CATEGORY_PRODUCT_XREF categoryProduct
WHERE categoryProduct.category_id = :categoryId
ORDER BY COALESCE (categoryProduct.display_order,999999)
Or, if you want to create your own DAO:
public class MyProductDaoImpl extends ProductDaoImpl implements MyProductDao {

    // Native SQL (it uses table names, not entity names), so createNativeQuery is needed.
    // Note it selects product ids, which still have to be resolved to Product entities.
    public static final String QUERY = "SELECT categoryProduct.product_id " +
            "FROM BLC_CATEGORY_PRODUCT_XREF categoryProduct " +
            "WHERE categoryProduct.category_id = :categoryId";

    @Override
    @SuppressWarnings("unchecked")
    public List<Product> findProductsByCategory(Long categoryId) {
        Query query = em.createNativeQuery(QUERY);
        query.setParameter("categoryId", categoryId);
        List<Number> rawIds = query.getResultList();
        List<Long> productIds = new ArrayList<Long>(rawIds.size());
        for (Number id : rawIds) {
            productIds.add(id.longValue()); // native results may come back as BigInteger
        }
        return readProductsByIds(productIds); // resolve the ids via the parent DAO
    }
}
You can choose whether you produce JSON or XML. Be sure that you have a corresponding Product model for binding the results.
I am sorry this isn't a 'coding' question, but after some considerable time on the learning path with XPages and Java I still struggle with finding definitive information on the correct way to carry out the basics.
When developing an XPages application using Java which is the best or most efficient method for accessing data?
1) Setting up and maintaining a view with a column for every field in the document, and retrieving data by ViewEntry.getColumnValues().get(int); i.e. retrieving the data from the view without accessing the document. This is what I have been doing, but my view columns keep increasing, along with the hassle of maintaining column sequences. My understanding is that this is the faster method.
or
2) Just work from the document, using a view only when necessary but in the main using Database.getDocumentByUNID().getItemValueString("field"), and not worrying about adding lots of columns. Far easier to maintain, but is accessing the document slowing things down?
Neither 1) nor 2).
Go this way to manage a document's data in a Java class:
Read the document's items all at once into fields of your Java class
Remember the UNID in a class field too
Recycle the document after reading the items
Get the document again by UNID for every read/write, and recycle it afterwards
Database.getDocumentByUNID() is quite fast, but call it only once for a document and not for every item. A minimal sketch of the pattern is below.
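Here is a minimal sketch of that pattern with the classic lotus.domino API (the Ticket class and the Subject item are illustrative assumptions):

import lotus.domino.Database;
import lotus.domino.Document;
import lotus.domino.NotesException;

public class Ticket {

    private final Database db;
    private String unid;    // remember the UNID instead of holding on to the Document
    private String subject; // items are read once into plain Java fields

    public Ticket(Database db, Document doc) throws NotesException {
        this.db = db;
        this.unid = doc.getUniversalID();
        this.subject = doc.getItemValueString("Subject");
        doc.recycle(); // release the backend handle right after reading
    }

    public void save() throws NotesException {
        Document doc = db.getDocumentByUNID(unid); // re-fetch by UNID for each write
        doc.replaceItemValue("Subject", subject);
        doc.save(true, false);
        doc.recycle(); // and recycle again afterwards
    }
}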
Update
As you mentioned in a comment, you have a database with 50.000 documents of various types.
I'd try to read only those documents you really need. If you want to read e.g. all support ticket documents of a customer you would use a view containing support tickets sorted by customer (without additional columns). You would get the documents with getAllDocumentsByKey(customer, true) and put the Java objects (each based on a document) into a Map.
You can maintain a cache in addition. Create a Map of models (model = Java object for a document type) and use the UNID as key. This way you avoid reading/storing documents twice; see the sketch below.
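A sketch of that lookup-plus-cache idea, reusing the Ticket class from above (the view name is an assumption):

import java.util.HashMap;
import java.util.Map;
import lotus.domino.Database;
import lotus.domino.Document;
import lotus.domino.DocumentCollection;
import lotus.domino.NotesException;
import lotus.domino.View;

public Map<String, Ticket> loadTickets(Database db, String customer) throws NotesException {
    Map<String, Ticket> cache = new HashMap<String, Ticket>(); // models keyed by UNID
    View view = db.getView("TicketsByCustomer"); // sorted by customer, no extra columns
    DocumentCollection docs = view.getAllDocumentsByKey(customer, true);
    Document doc = docs.getFirstDocument();
    while (doc != null) {
        Document next = docs.getNextDocument(doc); // fetch next before doc is recycled
        String unid = doc.getUniversalID();
        if (!cache.containsKey(unid)) {
            cache.put(unid, new Ticket(db, doc)); // Ticket reads the items and recycles doc
        } else {
            doc.recycle();
        }
        doc = next;
    }
    return cache;
}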
This is a really great question. I would first say that I agree 100% with Knut and that's how I code my objects that represent documents.
Below I've pasted a code example of what I typically do. Note this code is using the OpenNTF Domino API which among other things takes care of recycling for me. So you won't see any recycle calls.
But as Knut says - grab the document, get what you need from it, then it can be discarded again. When you want to save something, just get the document again. At this stage you could even check lastModified or something to see if another save took place since the time you loaded it.
Sometimes, for convenience, I overload the load() method to accept a ViewEntry:
public boolean load(ViewEntry entry) {}
Then in there I can just get the document, or, if it's a specific situation, use the view columns.
Now this works great when dealing with a single document at a time. It works really well if I want to loop through many documents for a collection. But if you get too many, you might see some of the overhead start to slow things down. In one app I have, if I ingest 30,000 docs like this into a collection, it can get a little slow.
I don't have a great answer for this yet. I've tried the big-view-with-many-columns approach, like it sounds like you did. I've also tried creating a lower-level, basic version of the object with just the needed fields, designed to work on view entries and their columns. Making sure you lazy load what you can is pretty important, I think.
Anyway here's a code example that shows how I build most of my document driven objects.
package com.notesIn9.video;
import java.io.Serializable;
import java.util.Date;
import org.openntf.domino.Database;
import org.openntf.domino.Document;
import org.openntf.domino.Session;
import org.openntf.domino.View;
import org.openntf.domino.utils.Factory;
public class Episode implements Serializable {
/**
*
*/
private static final long serialVersionUID = 1L;
private String title;
private Double number;
private String authorId;
private String contributorId;
private String summary;
private String subTitle;
private String youTube;
private String libsyn;
private Date publishedDate;
private Double minutes;
private Double seconds;
private String blogLink;
private boolean valid;
private String unid;
private String unique;
private String creator;
public Episode() {
this.unid = "";
}
public void create() {
Session session = Factory.getSession(); // this will be slightly
// different if not using the
// OpenNTF Domino API
this.setUnique(session.getUnique());
this.setCreator(session.getEffectiveUserName());
this.valid = true;
}
public Episode load(Document doc) {
this.loadValues(doc);
return this;
}
public boolean load(String key) {
// this key is the unique key of the document. UNID would be
// faster/easier.. I just kinda hate using them and seeing them in URLS
Session session = Factory.getSession();
Database currentDb = session.getCurrentDatabase();
Database db = session.getDatabase(currentDb.getServer(), "episodes.nsf");
View view = db.getView("lkup_episodes");
Document doc = view.getDocumentByKey(key); // This is deprecated because
// the API prefers to use
// getFirstDocumentByKey
if (null == doc) {
// document not found. DANGER
this.valid = false;
} else {
this.loadValues(doc);
}
return this.valid;
}
private void loadValues(Document doc) {
this.title = doc.getItemValueString("title");
this.number = doc.getItemValueDouble("number");
this.authorId = doc.getItemValueString("authorId");
this.contributorId = doc.getItemValueString("contributorId");
this.summary = doc.getItemValueString("summary");
this.subTitle = doc.getItemValueString("subtitle");
this.youTube = doc.getItemValueString("youTube");
this.libsyn = doc.getItemValueString("libsyn");
this.publishedDate = doc.getItemValue("publishedDate", Date.class);
this.minutes = doc.getItemValueDouble("minutes");
this.seconds = doc.getItemValueDouble("seconds");
this.blogLink = doc.getItemValueString("blogLink");
this.unique = doc.getItemValueString("unique");
this.creator = doc.getItemValueString("creator");
this.unid = doc.getUniversalID();
this.valid = true;
}
public boolean save() {
Session session = Factory.getSession();
Database currentDb = session.getCurrentDatabase();
Database db = session.getDatabase(currentDb.getServer(), "episodes.nsf");
Document doc = null;
if (this.unid.isEmpty()) {
doc = db.createDocument();
doc.replaceItemValue("form", "episode");
this.unid = doc.getUniversalID();
} else {
doc = db.getDocumentByUNID(this.unid);
}
this.saveValues(doc);
return doc.save();
}
private void saveValues(Document doc) {
doc.replaceItemValue("title", this.title);
doc.replaceItemValue("number", this.number);
doc.replaceItemValue("authorId", this.authorId);
doc.replaceItemValue("contributorId", this.contributorId);
doc.replaceItemValue("summary", this.summary);
doc.replaceItemValue("subtitle", this.subTitle);
doc.replaceItemValue("youTube", this.youTube);
doc.replaceItemValue("libsyn", this.libsyn);
doc.replaceItemValue("publishedData", this.publishedDate);
doc.replaceItemValue("minutes", this.minutes);
doc.replaceItemValue("seconds", this.seconds);
doc.replaceItemValue("blogLink", this.blogLink);
doc.replaceItemValue("unique", this.unique);
doc.replaceItemValue("creator", this.creator);
}
// getters and setters removed to condense code.
public boolean remove() {
Session session = Factory.getSession();
Database currentDb = session.getCurrentDatabase();
Database db = session.getDatabase(currentDb.getServer(), "episodes.nsf");
if (this.unid.isEmpty()) {
// this is a new Doc
return false;
} else {
Document doc = db.getDocumentByUNID(this.getUnid());
return doc.remove(true);
}
}
}
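For completeness, a quick usage sketch of the class above (assuming the usual getters/setters that were removed to condense the code):

Episode episode = new Episode();
if (episode.load("episode-042")) {      // look up by the unique key
    episode.setTitle("A better title"); // mutate the in-memory fields
    episode.save();                     // re-fetches the doc by UNID and writes
} else {
    episode.create();                   // stamps the unique key and creator
    episode.setTitle("Brand new episode");
    episode.save();                     // empty unid -> creates a new document
}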
It's all about balance. Everything has its price. Big views (case 1) slow down indexing. Opening documents every time (case 2) slows down your code.
Find something in between.
I need to display a large table with about 1300 roles at one time. (I know I should use a data scroller, but my users want to see the whole table at one time.) The table displays 4 columns. Two of those columns are from the object, but the other two are from objects referenced by the original object. I need to find the best/most efficient way to do this. I currently have this working, but when I reload the table it gives an out-of-memory error. I think that it's caused by the large amount of redundant data in memory.
One idea is to create a view object that the repository fills with only the needed fields.
Any other suggestions?
Here are the objects:
public class Database extends EntityObject {
private Long id;
private String name;
private String connectionString;
private String username;
private String password;
private String description;
// getter and setters omitted
}
public class Application extends EntityObject {
private Long id;
private String name;
private String fullName = "";
private String description;
private Database database;
private List<Role> roles = new ArrayList<Role>(0);
// getter and setters omitted
}
public class Role extends EntityObject {
private Long id;
private String name;
private String nameOnDatabase;
private Application application;
// getter and setters omitted
}
What I need displayed from the list of Roles is:
role.id, role.name, role.application.name, role.application.database.name
To optimize wisely, define what you are going to do with the data: view it, edit it, or both. Here are some common scenarios:
Retrieval using the lazy fetch type: mark the roles collection in Application with fetch = FetchType.LAZY so the list is only loaded when actually accessed.
Retrieval using a multiselect query: create a custom class (a DTO) and populate it with data from the database using a multiselect query, similar to a VIEW mapped as an entity; see the sketch after this list.
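A minimal sketch of the multiselect scenario with the JPA Criteria API (the RoleRow DTO and the wrapper class are assumptions), selecting exactly the four displayed values and nothing else:

import java.util.List;
import javax.persistence.EntityManager;
import javax.persistence.criteria.CriteriaBuilder;
import javax.persistence.criteria.CriteriaQuery;
import javax.persistence.criteria.Join;
import javax.persistence.criteria.Root;

public class RoleRows {

    // DTO holding only the four columns the table displays
    public static class RoleRow {
        public final Long id;
        public final String roleName;
        public final String applicationName;
        public final String databaseName;

        public RoleRow(Long id, String roleName, String applicationName, String databaseName) {
            this.id = id;
            this.roleName = roleName;
            this.applicationName = applicationName;
            this.databaseName = databaseName;
        }
    }

    // Only four scalar columns travel into memory, no entity graphs
    public static List<RoleRow> findAll(EntityManager em) {
        CriteriaBuilder cb = em.getCriteriaBuilder();
        CriteriaQuery<RoleRow> cq = cb.createQuery(RoleRow.class);
        Root<Role> role = cq.from(Role.class);
        Join<Role, Application> app = role.join("application");
        Join<Application, Database> db = app.join("database");
        cq.multiselect(role.get("id"), role.get("name"), app.get("name"), db.get("name"));
        return em.createQuery(cq).getResultList();
    }
}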
There are also other possibilities, such as Shared (L2) Entity Cache or Retrieval by Refresh.
Also check whether you are using the EntityManager correctly by reading "Am I supposed to call EntityManager.clear() often to avoid memory leaks?".
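One pattern from that discussion, sketched briefly: once the table data has been copied into DTOs, clear the persistence context so managed entities do not pile up across reloads.

// Load the entities and copy what the table needs into lightweight rows...
List<Role> roles = em.createQuery("select r from Role r", Role.class).getResultList();
// ...then drop all managed entities from the persistence context
em.clear();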
I want to abstract the implementation of my Azure TableServiceEntities so that I have one entity that will take an object of any type and use the properties of that object as the properties in the TableServiceEntity.
So my base object would be like:
public class SomeObject
{
    [EntityAttribute(PartitionKey=true)]
    public string OneProperty { get; set; }

    [EntityAttribute(RowKey=true)]
    public string TwoProperty { get; set; }

    public string SomeOtherProperty { get; set; }
}
public class SomeEntity<T> : TableServiceEntity
{
    public SomeEntity(T obj)
    {
        // Reflect over the wrapped object's properties
        var properties = typeof(T).GetProperties();
        foreach (var propertyInfo in properties)
        {
            object[] attributes = propertyInfo.GetCustomAttributes(typeof(EntityAttribute), false);
            foreach (var attribute in attributes)
            {
                EntityAttribute ea = (EntityAttribute)attribute;
                if (ea.PartitionKey)
                    PartitionKey = (string)propertyInfo.GetValue(obj, null);
                if (ea.RowKey)
                    RowKey = (string)propertyInfo.GetValue(obj, null);
            }
        }
    }
}
Then I could access the entity in the context like this
var objects =
    (from entity in context.CreateQuery<SomeEntity<SomeObject>>("SomeEntities") select entity);
var entityList = objects.ToList();
foreach (var obj in entityList)
{
var someObject = new SomeObject();
    someObject.OneProperty = obj.OneProperty;
    someObject.TwoProperty = obj.TwoProperty;
}
This doesn't seem like it should be that difficult, but I have a feeling I have been looking at too many possible solutions and have just managed to confuse myself.
Thanks for any pointers.
Take a look at the Lokad Cloud O/C mapper. I think the source code imitates what you're attempting, but it has insightful rationale about its different approach to Azure table storage.
http://lokadcloud.codeplex.com/
I have written an alternate Azure table storage client in F#, Lucifure Stash, which supports many abstractions including persisting a dictionary object. Lucifure Stash also supports large data columns > 64K, arrays & lists, enumerations, out of the box serialization, user defined morphing, public and private properties and fields and more.
It is available free for personal use at http://www.lucifure.com or via NuGet.com.
What you are attempting to achieve, a single generic class for any entity, can be implemented in Lucifure Stash by using the [StashPool] attribute on a dictionary type.
I have written a blog post about the table storage context and working with entities by specifying the entity type. Maybe it can help you: http://wblo.gs/a2G
It seems you still want to use concrete types. Thus, the SomeEntity is a bit redundant. Actually, TableServiceEntity is already an abstract class. You can derive SomeObject from TableServiceEntity. From my experience, this won’t introduce any issues to your scenario.
In addition, even with your custom SomeEntity, the last piece of code fails to remove the dependency on the concrete SomeObject class anyway.
Best Regards,
Ming Xu.