Need a new method in CatalogEndpoint: findAllProductsForCategory - broadleaf-commerce

I have REST API working, but I need a way to return all products within a certain category. Right now I'm using findSearchResultsByCategoryAndQuery by passing the category id and using a very generic query to get almost all the products in the category. What I really need is a method called findAllProductsForCategory which returns all products in the category.
I'm new to Broadleaf and to REST APIs. Can someone please guide me through extending CatalogEndpoint to get the functionality I need?

Although Broadleaf provides SQL injection protection (ExploitProtectionServiceImpl), I recommend using ProductDao.
Extend org.broadleafcommerce.core.web.api.endpoint.catalog.CatalogEndpoint, or add a new method to your implementation that uses ProductDao:
@Resource(name = "blProductDao")
private ProductDao productDao;

@RequestMapping(value = "search/products-by-category/{categoryId}") // GET is the default
public List<Product> findSearchResultsByCategory(HttpServletRequest request, @PathVariable("categoryId") Long categoryId) {
    return productDao.readProductsByCategory(categoryId);
}
Under the hood it queries the database with:
SELECT categoryProduct.product_id
FROM BLC_CATEGORY_PRODUCT_XREF categoryProduct
WHERE categoryProduct.category_id = :categoryId
ORDER BY COALESCE(categoryProduct.display_order, 999999)
Or, if you want to create your own DAO:
public class MyProductDaoImpl extends ProductDaoImpl implements MyProductDao {

    public static final String QUERY = "SELECT categoryProduct.product_id " +
            "FROM BLC_CATEGORY_PRODUCT_XREF categoryProduct " +
            "WHERE categoryProduct.category_id = :categoryId";

    @Override
    public List<Product> meFindingProductsByCategory(String categoryId) {
        // the query above is native SQL, so use createNativeQuery rather than createQuery
        Query query = em.createNativeQuery(QUERY);
        query.setParameter("categoryId", categoryId);
        return query.getResultList();
    }
}
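For reference, the MyProductDao interface used above isn't shown in the original; a minimal sketch could simply extend Broadleaf's ProductDao and declare the new method (keeping the method name from the implementation):
public interface MyProductDao extends ProductDao {

    // same signature as the implementation above
    List<Product> meFindingProductsByCategory(String categoryId);
}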
You can choose whether to produce JSON or XML. Be sure that you have a corresponding Product model for binding the results.
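If you go the XML/JAXB route, a minimal response model might look like the sketch below (the class and field names are illustrative; Broadleaf's REST module also ships its own wrapper classes that you could reuse instead):
@XmlRootElement(name = "product")
@XmlAccessorType(XmlAccessType.FIELD)
public class ProductSummary {

    @XmlElement
    protected Long id;

    @XmlElement
    protected String name;

    public ProductSummary() {
    }

    public ProductSummary(Long id, String name) {
        this.id = id;
        this.name = name;
    }

    // getters and setters omitted
}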

Related

Spring Data MongoDB GeoSpatial Distance

I am trying to find the distances along with the locations using Spring Data MongoDB's geospatial support.
I am following this: https://docs.spring.io/spring-data/mongodb/docs/current/reference/html/#mongo.geo-near
GeoResults<VenueWithDisField> results = template.query(Venue.class)
        .as(VenueWithDisField.class)
        .near(NearQuery.near(new GeoJsonPoint(-73.99, 40.73), KILOMETERS))
        .all();
I tried
@Data
@NoArgsConstructor
@AllArgsConstructor
public class RestaurantWithDisField {
    private Restaurant restaurant;
    private Number dis;
}
@Data
@AllArgsConstructor
@NoArgsConstructor
@Document(collection = "restaurants")
public class Restaurant {
    @Id
    private String id;
    private String name;
    @GeoSpatialIndexed(name = "location", type = GeoSpatialIndexType.GEO_2DSPHERE)
    private GeoJsonPoint location;
}
public GeoResults<RestaurantWithDisField> findRestaurantsNear(GeoJsonPoint point, Distance distance) {
    final NearQuery nearQuery = NearQuery.near(point)
            .maxDistance(distance)
            .spherical(true);
    return mongoTemplate.query(Restaurant.class)
            .as(RestaurantWithDisField.class)
            .near(nearQuery)
            .all();
}
But in the result I am getting the output below. If I don't set the target type and just collect the domain type, I get all the other values but not the distance.
Restaurant - RestaurantWithDisField(restaurant=null, dis=0.12914248082237584
Restaurant - RestaurantWithDisField(restaurant=null, dis=0.19842138954997746)
Restaurant - RestaurantWithDisField(restaurant=null, dis=0.20019522190348576)
Can someone please help me understand why I am unable to fetch the domain type value, or how I should?
Thank you
The mapping fails to resolve restaurant in RestaurantWithDisField because the values within the result Document do not match the target entity's properties.
You might want to use inheritance instead of composition here and let RestaurantWithDisField extend Restaurant, provide your own converter, or just use Restaurant and rely on GeoResults holding a list of GeoResult entries that already include the Distance along with the actual mapped domain type - pretty much the same thing you've been modelling with RestaurantWithDisField.
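For example, if you drop the wrapper and query for the domain type directly, the distance is still available on each GeoResult (a minimal sketch based on the method from the question):
public GeoResults<Restaurant> findRestaurantsNear(GeoJsonPoint point, Distance distance) {
    NearQuery nearQuery = NearQuery.near(point)
            .maxDistance(distance)
            .spherical(true);
    return mongoTemplate.query(Restaurant.class)
            .near(nearQuery)
            .all();
}

// usage: GeoResults is iterable over GeoResult entries
// for (GeoResult<Restaurant> hit : findRestaurantsNear(point, distance)) {
//     Restaurant restaurant = hit.getContent();
//     Distance dis = hit.getDistance();
// }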
A Spring Data Mongo repository can also generate the right query for you if you name the method correctly and put the data into your domain POJO. I was able to find examples here (blocking) and here (reactive).
interface RestaurantRepository extends MongoRepository<Restaurant, String> {
    Collection<GeoResult<Restaurant>> findByName(String name, Point location);
}
From what I see, the (Reactive)MongoTemplate uses a GeoNearResultDocumentCallback to wrap the restaurant in a GeoResult. You may want to take a look there.

How to map object references in mapstruct using JHipster?

Let's say that you create a JHipster app for a Blog with Posts using a JDL script like this, and you want to have a BlogDTO that shows the Posts within it (and a PostDTO that shows the Comments each Post has):
entity Blog {
creationDate Instant required
title String minlength(2) maxlength(100) required
}
entity Post {
creationDate Instant required
headline String minlength(2) maxlength(100) required
bodytext String minlength(2) maxlength(1000) required
image ImageBlob
}
entity Comment {
creationDate Instant required
commentText String minlength(2) maxlength(1000) required
}
// RELATIONSHIPS:
relationship OneToMany {
Blog to Post{blog required}
Post{comment} to Comment{post(headline) required}
}
// Set pagination options
paginate all with pagination
// DTOs for all
dto * with mapstruct
// Set service options to all except few
service all with serviceClass
// Filtering
filter *
JHipster will create your Blog, Post and Comment entities with their DTOs and assumes that you do not want to populate the Blog with its Posts or the Posts with their Comments, so your BlogMapper will look like this:
@Mapper(componentModel = "spring", uses = {})
public interface BlogMapper extends EntityMapper<BlogDTO, Blog> {

    @Mapping(target = "posts", ignore = true)
    Blog toEntity(BlogDTO blogDTO);

    default Blog fromId(Long id) {
        if (id == null) {
            return null;
        }
        Blog blog = new Blog();
        blog.setId(id);
        return blog;
    }
}
with a BlogDTO like this:
public class BlogDTO implements Serializable {

    private Long id;

    @NotNull
    private Instant creationDate;

    @NotNull
    @Size(min = 2, max = 100)
    private String title;

    // getters, setters, hashCode, equals & toString
}
Can anybody help me modify the code so that the BlogDTO will show the Posts (and the PostDTO will show the Comments)? Thanks.
P.S.: I already changed the annotation to include the PostMapper class
@Mapper(componentModel = "spring", uses = {PostMapper.class})
and changed @Mapping(target = "posts", ignore = false) so that posts are no longer ignored, but it does not work. The API example (Swagger) looks fine, but the PostDTO is null (even though the data is there).
Add a Set<PostDTO> posts; field to your BlogDTO and a Set<CommentDTO> comments; field to your PostDTO, along with getters and setters for those fields in the DTO classes. Then, in your mappers, make sure that BlogMapper uses PostMapper and that PostMapper uses CommentMapper.
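A rough sketch of the Blog side of those changes (field names are assumed from the generated code, validation annotations omitted; the Post/Comment side is analogous):
public class BlogDTO implements Serializable {

    private Long id;
    private Instant creationDate;
    private String title;
    private Set<PostDTO> posts = new HashSet<>();

    // getters, setters, hashCode, equals & toString as before
}

@Mapper(componentModel = "spring", uses = {PostMapper.class})
public interface BlogMapper extends EntityMapper<BlogDTO, Blog> {

    // no @Mapping(target = "posts", ignore = true) here, so MapStruct maps
    // Blog.posts to BlogDTO.posts via PostMapper
    BlogDTO toDto(Blog blog);

    Blog toEntity(BlogDTO blogDTO);
}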
You may also need to configure the caching annotations on the posts field in Blog.java and the comments field in Post.java to fit your use case. With NONSTRICT_READ_WRITE, there can be a delay in updating the cache, resulting in stale data being returned by the API.
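For illustration, the relationship in Blog.java typically carries a Hibernate cache annotation like the one below when the second-level cache is enabled (an assumption about your generated code); that is the annotation the note above refers to:
@OneToMany(mappedBy = "blog")
@Cache(usage = CacheConcurrencyStrategy.NONSTRICT_READ_WRITE)
private Set<Post> posts = new HashSet<>();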

Best method for retrieving data in XPages and Java

I am sorry this isn't a 'coding' question, but after some considerable time on the learning path with XPages and Java I still struggle with finding definitive information on the correct way to carry out the basics.
When developing an XPages application using Java, which is the best or most efficient method for accessing data?
1) Setting up and maintaining a view with a column for every field in the document, and retrieving data via ViewEntry.getColumnValues().get(int), i.e. reading the data from the view without ever opening the document. This is what I have been doing, but my view columns keep increasing, along with the hassle of maintaining the column order. My understanding is that this is the faster method.
or
2) Just drop everything into the document, use a view only when necessary, and in the main use Database.getDocumentByUNID().getItemValueString("field") without worrying about adding lots of columns. Far easier to maintain, but is accessing the document slowing things down?
Neither 1) nor 2).
Manage a document's data in a Java class this way:
- Read the document's items all at once into class fields
- Remember the UNID in a class field too
- Recycle the document after reading the items
- Get the document again by UNID for every read/write and recycle it afterwards
Database.getDocumentByUNID() is quite fast, but call it only once per document and not for every item.
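A rough sketch of that pattern with the plain lotus.domino API (the class name and item names are made up for this example):
import lotus.domino.Database;
import lotus.domino.Document;
import lotus.domino.NotesException;

public class TicketModel {

    private String unid;
    private String subject;
    private String customer;

    public void load(Document doc) throws NotesException {
        // read everything needed in one go, then let go of the document
        this.subject = doc.getItemValueString("subject");
        this.customer = doc.getItemValueString("customer");
        this.unid = doc.getUniversalID();
        doc.recycle();
    }

    public void save(Database db) throws NotesException {
        // fetch the document again by UNID only when writing
        Document doc = db.getDocumentByUNID(this.unid);
        doc.replaceItemValue("subject", this.subject);
        doc.replaceItemValue("customer", this.customer);
        doc.save();
        doc.recycle();
    }
}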
Update
As you mentioned in a comment, you have a database with 50,000 documents of various types.
I'd try to read only those documents you really need. If you want to read, for example, all support ticket documents of a customer, you would use a view containing support tickets sorted by customer (without additional columns). You would get the documents with getAllDocumentsByKey(customer, true) and put the Java objects (each based on a document) into a Map.
You can maintain a cache in addition: create a Map of models (model = Java object for a document type) and use the UNID as the key. This way you can avoid reading/storing documents twice.
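As a rough illustration of such a cache (reusing the hypothetical TicketModel from the sketch above):
private Map<String, TicketModel> cache = new HashMap<String, TicketModel>();

public TicketModel getTicket(Database db, String unid) throws NotesException {
    TicketModel model = cache.get(unid);
    if (model == null) {
        Document doc = db.getDocumentByUNID(unid);
        model = new TicketModel();
        model.load(doc); // reads the items and recycles the document
        cache.put(unid, model);
    }
    return model;
}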
This is a really great question. I would first say that I agree 100% with Knut and that's how I code my objects that represent documents.
Below I've pasted a code example of what I typically do. Note that this code uses the OpenNTF Domino API, which among other things takes care of recycling for me, so you won't see any recycle calls.
But as Knut says - grab the document, get what you need from it, then it can be discarded again. When you want to save something, just get the document again. At this stage you could even check lastModified or something to see if another save took place since you loaded it.
Sometimes, for convenience, I overload the load() method and add a NotesViewEntry:
public boolean load(ViewEntry entry) {}
Then in there I could just get the document, or, in specific situations, use the view columns.
Now this works great when dealing with a single document at a time. It also works really well if I want to loop through many documents to build a collection. But if you get too many, you might see some of the overhead start to slow things down. In one app I have, if I "ingest" 30,000 docs like this into a collection it can get a little slow.
I don't have a great answer for that yet. I've tried the big-view-with-many-columns approach, like it sounds you did. I've also tried creating a lower-level, basic version of the object with just the needed fields, designed to work on ViewEntry objects and their columns. Neither is a great answer yet. Making sure you lazy load what you can is pretty important, I think (see the lazy-loading sketch after the class below).
Anyway here's a code example that shows how I build most of my document driven objects.
package com.notesIn9.video;
import java.io.Serializable;
import java.util.Date;
import org.openntf.domino.Database;
import org.openntf.domino.Document;
import org.openntf.domino.Session;
import org.openntf.domino.View;
import org.openntf.domino.utils.Factory;
public class Episode implements Serializable {
/**
*
*/
private static final long serialVersionUID = 1L;
private String title;
private Double number;
private String authorId;
private String contributorId;
private String summary;
private String subTitle;
private String youTube;
private String libsyn;
private Date publishedDate;
private Double minutes;
private Double seconds;
private String blogLink;
private boolean valid;
private String unid;
private String unique;
private String creator;
public Episode() {
this.unid = "";
}
public void create() {
Session session = Factory.getSession(); // this will be slightly
// different if not using the
// OpenNTF Domino API
this.setUnique(session.getUnique());
this.setCreator(session.getEffectiveUserName());
this.valid = true;
}
public Episode load(Document doc) {
this.loadValues(doc);
return this;
}
public boolean load(String key) {
// this key is the unique key of the document. UNID would be
// faster/easier.. I just kinda hate using them and seeing them in URLS
Session session = Factory.getSession();
Database currentDb = session.getCurrentDatabase();
Database db = session.getDatabase(currentDb.getServer(), "episodes.nsf");
View view = db.getView("lkup_episodes");
Document doc = view.getDocumentByKey(key); // This is deprecated because
// the API prefers to use
// getFirstDocumentByKey
if (null == doc) {
// document not found. DANGER
this.valid = false;
} else {
this.loadValues(doc);
}
return this.valid;
}
private void loadValues(Document doc) {
this.title = doc.getItemValueString("title");
this.number = doc.getItemValueDouble("number");
this.authorId = doc.getItemValueString("authorId");
this.contributorId = doc.getItemValueString("contributorId");
this.summary = doc.getItemValueString("summary");
this.subTitle = doc.getItemValueString("subtitle");
this.youTube = doc.getItemValueString("youTube");
this.libsyn = doc.getItemValueString("libsyn");
this.publishedDate = doc.getItemValue("publishedDate", Date.class);
this.minutes = doc.getItemValueDouble("minutes");
this.seconds = doc.getItemValueDouble("seconds");
this.blogLink = doc.getItemValueString("blogLink");
this.unique = doc.getItemValueString("unique");
this.creator = doc.getItemValueString("creator");
this.unid = doc.getUniversalID();
this.valid = true;
}
public boolean save() {
Session session = Factory.getSession();
Database currentDb = session.getCurrentDatabase();
Database db = session.getDatabase(currentDb.getServer(), "episodes.nsf");
Document doc = null;
if (this.unid.isEmpty()) {
doc = db.createDocument();
doc.replaceItemValue("form", "episode");
this.unid = doc.getUniversalID();
} else {
doc = db.getDocumentByUNID(this.unid);
}
this.saveValues(doc);
return doc.save();
}
private void saveValues(Document doc) {
doc.replaceItemValue("title", this.title);
doc.replaceItemValue("number", this.number);
doc.replaceItemValue("authorId", this.authorId);
doc.replaceItemValue("contributorId", this.contributorId);
doc.replaceItemValue("summary", this.summary);
doc.replaceItemValue("subtitle", this.subTitle);
doc.replaceItemValue("youTube", this.youTube);
doc.replaceItemValue("libsyn", this.libsyn);
doc.replaceItemValue("publishedData", this.publishedDate);
doc.replaceItemValue("minutes", this.minutes);
doc.replaceItemValue("seconds", this.seconds);
doc.replaceItemValue("blogLink", this.blogLink);
doc.replaceItemValue("unique", this.unique);
doc.replaceItemValue("creator", this.creator);
}
// getters and setters removed to condense code.
public boolean remove() {
Session session = Factory.getSession();
Database currentDb = session.getCurrentDatabase();
Database db = session.getDatabase(currentDb.getServer(), "episodes.nsf");
if (this.unid.isEmpty()) {
// this is a new Doc
return false;
} else {
Document doc = db.getDocumentByUNID(this.getUnid());
return doc.remove(true);
}
}
}
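As mentioned above, lazy loading helps when most callers only need a couple of fields. A hypothetical illustration of the idea, added to a class like Episode (the bodyText field and item name are made up for this example):
private String bodyText; // not read in loadValues(); fetched on first access only

public String getBodyText() {
    if (this.bodyText == null) {
        Session session = Factory.getSession();
        Database currentDb = session.getCurrentDatabase();
        Database db = session.getDatabase(currentDb.getServer(), "episodes.nsf");
        Document doc = db.getDocumentByUNID(this.unid);
        this.bodyText = doc.getItemValueString("bodyText");
    }
    return this.bodyText;
}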
It's all about balance. Everything has its price. Big views (case 1) slow down view indexing. Opening documents every time (case 2) slows down your code.
Find something in between.

Add behavior to existing implementation - C# / Design Pattern

My current implementation for the service and business layers is straightforward, as below.
public class MyEntity { }
// Business layer
public interface IBusiness { IList<MyEntity> GetEntities(); }
public class MyBusinessOne : IBusiness
{
public IList<MyEntity> GetEntities()
{
return new List<MyEntity>();
}
}
//factory
public static class Factory
{
public static T Create<T>() where T : class
{
return new MyBusinessOne() as T; // returns instance based on T
}
}
//Service layer
public class MyService
{
public IList<MyEntity> GetEntities()
{
return Factory.Create<IBusiness>().GetEntities();
}
}
We needed some changes to the current implementation: the data grew over time, and the service and client can no longer handle the volume, so we needed to add pagination to the current service. We also expect more features (such as returning a fault when the data exceeds a threshold, applying filters, etc.), so the design needs to be updated.
Following is my new proposal.
public interface IBusiness
{
IList<MyEntity> GetEntities();
}
public interface IBehavior
{
IEnumerable<T> Apply<T>(IEnumerable<T> data);
}
public abstract class MyBusiness
{
protected List<IBehavior> Behaviors = new List<IBehavior>();
public void AddBehavior(IBehavior behavior)
{
Behaviors.Add(behavior);
}
}
public class PaginationBehavior : IBehavior
{
public int PageSize = 10;
public int PageNumber = 2;
public IEnumerable<T> Apply<T>(IEnumerable<T> data)
{
//apply behavior here
return data
.Skip(PageNumber * PageSize)
.Take(PageSize);
}
}
public class MyEntity { }
public class MyBusinessOne : MyBusiness, IBusiness
{
public IList<MyEntity> GetEntities()
{
IEnumerable<MyEntity> result = new List<MyEntity>();
this.Behaviors.ForEach(rs =>
{
result = rs.Apply<MyEntity>(result);
});
return result.ToList();
}
}
public static class Factory
{
public static T Create<T>(List<IBehavior> behaviors) where T : class
{
// returns instance based on T
var instance = new MyBusinessOne();
behaviors.ForEach(rs => instance.AddBehavior(rs));
return instance as T;
}
}
public class MyService
{
public IList<MyEntity> GetEntities(int currentPage)
{
List<IBehavior> behaviors = new List<IBehavior>() {
new PaginationBehavior() { PageNumber = currentPage, }
};
return Factory.Create<IBusiness>(behaviors).GetEntities();
}
}
Experts, please tell me whether my implementation is correct or whether I am overkilling it. If it is correct, which design pattern is it - Decorator or Visitor?
Also, my service returns a JSON string. How can I use this behavior collection to serialize only selected properties rather than the entire entity? The list of properties comes from the user in the request (a kind of column picker).
It looks like I don't have enough points to comment on your question, so I am going to make some assumptions, as I am not a C# expert.
Assumption 1: It looks like you are getting the data first and then applying the pagination using the behavior object. If so, this is the wrong approach. Let's say there are 500 records and you are showing 50 records per fetch. Instead of simply fetching 50 records from the DB, you are fetching 500 records 10 times and on top of it you are adding a costly filter. The DB is better equipped to do this job than C# or Java.
I would not consider pagination a behavior with respect to the service; it is a behavior of the presentation layer. Your service should only worry about data granularity. It looks like one of your customers wants all the data in one go and others might want a subset of that data.
Option 1: In the DAO layer, have two methods: one for paginated fetches and one for a regular fetch. Based on the incoming params, decide which method to call.
Option 2: Create two methods at the service level: one for a small subset of the data and the other for the whole set. Since you said JSON, this should be a RESTful service, so based on the incoming URL call the correct method. If you use Jersey, this should be easy (see the sketch below).
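A rough Jersey sketch of Option 2 (in Java, since Jersey is mentioned; the resource path, the EntityBusiness facade and its paged getEntities overload are illustrative, not part of the original code):
@Path("entities")
public class EntityResource {

    @Inject
    private EntityBusiness business; // hypothetical business facade

    // whole set of data
    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public List<MyEntity> getAll() {
        return business.getEntities();
    }

    // small subset of data; paging is pushed down to the DAO/database
    @GET
    @Path("page")
    @Produces(MediaType.APPLICATION_JSON)
    public List<MyEntity> getPage(@QueryParam("page") @DefaultValue("1") int page,
                                  @QueryParam("size") @DefaultValue("10") int size) {
        return business.getEntities(page, size);
    }
}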
In a service, new behaviors can be added by simply exposing new methods or adding new params to existing methods/functionality (just make sure those changes are backward compatible). We really don't need the Decorator or Visitor pattern here. The only concern is that no existing user should be affected.

"Lambda Parameter not in scope" exception using SimpleRepository's Single method

I'm attempting to use the SimpleRepository to perform a fetch based on a non-ID property. Here's the Customer class I'm using:
[Serializable]
public class Customer : IEntity<Guid>
{
public Guid ProviderUserKey { get; set; }
public Guid ID
{
get; set;
}
}
I'm using SimpleRepository with migrations turned on. The code that throws the "Lambda Parameter not in scope" exception is below:
public class CustomerRepository :
ICustomerRepository
{
private readonly IRepository _impl;
public CustomerRepository(string connectionStringName)
{
_impl = new SimpleRepository(connectionStringName,
SimpleRepositoryOptions.RunMigrations);
}
public Customer GetCustomer(string userName)
{
var user = Membership.GetUser(userName);
// Code to guard against a missing user would go here
// This line throws the exception
var customer = _impl.Single<Customer>(c => c.ProviderUserKey.Equals(user.ProviderUserKey));
// Code to create a new customer based on the
// ASP.NET Membership user would go here
return customer;
}
}
I'm not sure at what point in the LINQ expression compilation this throws, but I am running this example against an empty database. The schema generation gets far enough to create the table structure, but the expression can't be evaluated.
Does anyone know what I might be doing wrong?
Thanks!
I've had reports of this - can you add this (and your code) as an issue please?
