How to persist a PanacheEntity while avoiding a duplicate key exception? - quarkus-panache

I want to persist an entity, but skip it if it already exists in the datastore. Assume the name field is part of the primary key, and that p1 already exists in the datastore. Only p2 should be inserted; inserting p1 again produces a duplicate key exception.
@Entity
public class PersonEntity extends PanacheEntity {

    String name;

    public PersonEntity() {
        // JPA requires a no-arg constructor
    }

    public PersonEntity(String name) {
        this.name = name;
    }

    public static Uni<PersonEntity> findByName(String name) {
        return find("name", name).firstResult();
    }
}
@QuarkusTest
public class PersonResourceTest {

    @Test
    @ReactiveTransactional
    void persistListOfPersons() {
        List<PersonEntity> persons = List.of(new PersonEntity("p1"), new PersonEntity("p2"));
        Predicate<PersonEntity> personExists = entity -> {
            // How to consume the Uni?
            Uni<PersonEntity> entityUni = PersonEntity.findByName(entity.name);
            // entityUni.onItem().ifNull().continueWith(???);
            // return true;  -> include entity in the filtered stream
            // return false; -> exclude entity from the filtered stream
            return false;
        };
        List<PersonEntity> filteredPersons = persons.stream().filter(personExists).toList();
        PersonEntity.persist(filteredPersons);
    }
}
I can't produce a valid filter predicate: I need a boolean somehow produced by the asynchronous person query, but how?
This should serve as a minimal reproducible example.
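One way to make this work (a sketch of my own, not from the original question, assuming Mutiny and Hibernate Reactive Panache as used above): because the existence check is asynchronous, the filtering has to happen inside the reactive pipeline rather than in a blocking Predicate. Inside the test method, with persons in scope:

import io.smallrye.mutiny.Multi;
import io.smallrye.mutiny.Uni;
import java.util.Optional;

Uni<Void> persistMissing = Multi.createFrom().iterable(persons)
        // look up each person; wrap in Optional because a Multi must not emit null items
        .onItem().transformToUniAndConcatenate(p ->
                PersonEntity.findByName(p.name)
                        .onItem().transform(existing ->
                                existing == null ? Optional.of(p) : Optional.<PersonEntity>empty()))
        .filter(Optional::isPresent)
        .map(Optional::get)
        .collect().asList()
        // persist only the entities that were not found
        .onItem().transformToUni(newPersons -> PersonEntity.persist(newPersons));

This Uni then replaces the blocking stream().filter(...).toList() and persist(...) sequence. If the database supports it, a native INSERT ... ON CONFLICT DO NOTHING statement is an alternative that pushes the duplicate check down to the datastore.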

Related

Apache Calcite - ReflectiveSchema StackOverflowError

I'm trying to create a simple schema using ReflectiveSchema and then project an Employee "table", using Groovy as my programming language. Code below.
class CalciteDemo {

    String doDemo() {
        RelNode node = new CalciteAlgebraBuilder().build()
        return RelOptUtil.toString(node)
    }

    class DummySchema {
        public final Employee[] emp = [new Employee(1, "Ting"), new Employee(2, "Tong")]

        @Override
        String toString() {
            return "DummySchema"
        }

        class Employee {
            Employee(int id, String name) {
                this.id = id
                this.name = name
            }

            public final int id
            public final String name
        }
    }

    class CalciteAlgebraBuilder {
        FrameworkConfig config

        CalciteAlgebraBuilder() {
            SchemaPlus rootSchema = Frameworks.createRootSchema(true)
            Schema schema = new ReflectiveSchema(new DummySchema())
            SchemaPlus rootPlusDummy = rootSchema.add("dummySchema", schema)
            this.config = Frameworks.newConfigBuilder().parserConfig(SqlParser.Config.DEFAULT)
                    .defaultSchema(rootPlusDummy).traitDefs((List<RelTraitDef>) null).build()
        }

        RelNode build() {
            RelBuilder.create(config).scan("emp").build()
        }
    }
}
I seem to be passing the "schema" object to the ReflectiveSchema constructor correctly, but I think it's failing while trying to get the fields of the Employee class.
Here's the error
java.lang.StackOverflowError
at java.lang.Class.copyFields(Class.java:3115)
at java.lang.Class.getFields(Class.java:1557)
at org.apache.calcite.jdbc.JavaTypeFactoryImpl.createStructType(JavaTypeFactoryImpl.java:76)
at org.apache.calcite.jdbc.JavaTypeFactoryImpl.createType(JavaTypeFactoryImpl.java:160)
at org.apache.calcite.jdbc.JavaTypeFactoryImpl.createType(JavaTypeFactoryImpl.java:151)
at org.apache.calcite.jdbc.JavaTypeFactoryImpl.createStructType(JavaTypeFactoryImpl.java:84)
at org.apache.calcite.jdbc.JavaTypeFactoryImpl.createType(JavaTypeFactoryImpl.java:160)
at org.apache.calcite.jdbc.JavaTypeFactoryImpl.createStructType(JavaTypeFactoryImpl.java:84)
What is wrong with this example?
Seems that just moving the Employee class a level up, i.e. making it a sibling of the DummySchema class, makes the problem go away.
The likely reason: a non-static nested class carries a synthetic this$0 field referencing its enclosing instance, so when org.apache.calcite.jdbc.JavaTypeFactoryImpl reflects over Employee it finds a field of type DummySchema, whose emp field is of type Employee[] again, and createStructType/createType recurse between the two types until the stack overflows. The way Calcite's JavaTypeFactoryImpl is written, it doesn't handle such synthetic/internal fields of Groovy inner classes well.
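For illustration, here is the same schema as a minimal Java sketch (using Java rather than Groovy is an assumption on my part): declaring the nested classes static removes the synthetic outer-class reference, and the recursion with it.

import org.apache.calcite.adapter.java.ReflectiveSchema;

public class CalciteDemo {

    // static: no hidden this$0 field for ReflectiveSchema to traverse
    public static class DummySchema {
        public final Employee[] emp = {new Employee(1, "Ting"), new Employee(2, "Tong")};
    }

    public static class Employee {
        public final int id;
        public final String name;

        public Employee(int id, String name) {
            this.id = id;
            this.name = name;
        }
    }

    public static void main(String[] args) {
        // reflection now only walks the declared public fields
        new ReflectiveSchema(new DummySchema());
    }
}

In Groovy the equivalent fix is to declare the nested classes static, or to move Employee up a level as described above.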

How to mock the DataStax Row object [com.datastax.driver.core.Row] - Unit Test

Please find below the code for the entity object, accessor, and DAO.
@Table(name = "Employee")
public class Employee {

    @PartitionKey
    @Column(name = "empname")
    private String empname;

    @ClusteringColumn(0)
    @Column(name = "country")
    private String country;

    @Column(name = "status")
    private String status;
}
Accessor:
@Accessor
public interface EmployeeAccessor {

    @Query(value = "SELECT DISTINCT empname FROM EMPLOYEE")
    ResultSet getAllEmployeeName();
}
The DAO's getAllEmployeeNames returns a list of employee names, sorted in ascending order.
DAO:
public class EmployeeDAOImpl implements EmployeeDAO {

    private EmployeeAccessor employeeAccessor;

    @PostConstruct
    public void init() {
        employeeAccessor = datastaxCassandraTemplate.getAccessor(EmployeeAccessor.class);
    }

    @Override
    public List<String> getAllEmployeeNames() {
        List<Row> names = employeeAccessor.getAllEmployeeName().all();
        List<String> empnames = names.stream()
                .map(name -> name.getString("empname")).collect(Collectors.toList());
        empnames.sort(naturalOrder()); // sorted
        return empnames;
    }
}
JUnit test (Mockito):
I am not able to mock the List<Row>. How do I mock it and return a list of rows with the values "foo" and "bar"? Please help me unit test this.
@Category(UnitTest.class)
@RunWith(MockitoJUnitRunner.class)
public class EmployeeDAOImplUnitTest {

    @Mock
    private ResultSet resultSet;

    @Mock
    private EmployeeAccessor empAccessor;

    // here is the problem... how to mock the List<Row> object --> com.datastax.driver.core.Row (interface)
    // this code will result in a compilation error as we are mapping a List<Row> to an ArrayList of String
    // how to mock the List<Row> with a list of String row objects
    private List<Row> unSortedTemplateNames = new ArrayList() {
        {
            add("foo");
            add("bar");
        }
    };

    // this is a test case to check if the results are sorted or not
    // mock the accessor and send rows as "foo" & "bar"
    // after calling the dao, the first element must be "bar" and not "foo"
    @Test
    public void shouldReturnSorted_getAllTemplateNames() {
        when(empAccessor.getAllEmployeeName()).thenReturn(resultSet);
        when(resultSet.all()).thenReturn(unSortedTemplateNames); // how to mock the List<Row> object ???
        // I am testing if the results are sorted; the first element should not be "foo"
        assertThat(countryTemplates.get(0), is("bar"));
    }
}
Wow! This is overly complex, hard to follow, and not an ideal way to write unit tests.
Using PowerMock(ito) along with "static" references in your own code is not recommended and is a sure sign of a code smell.
First, I am not sure why you decided to use a static reference (e.g. EmployeeAccessor.getAllEmployeeName().all(); inside the EmployeeDAOImpl class, getAllEmployeeNames() method) instead of using the instance variable (i.e. empAccessor), which is more conducive to actual "unit testing".
The EmployeeAccessor.getAllEmployeeName() "interface" method is not static (clearly). However, seemingly, whatever this (datastaxCassandraTemplate.getAccessor(EmployeeAccessor.class);) generates makes it so (really?), which then requires the use of PowerMock(ito). o.O
Frameworks like PowerMock, and extensions of it (i.e. "PowerMockito"), were meant to test and mock code used by your application (unfortunately, but necessarily so) where this "other" code makes use of statics, Singletons, private methods and so on. This anti-pattern really ought not to be followed in your own application design.
Second, it is not really apparent what the "Subject Under Test" (SUT) is in your test case. You implemented a test class (i.e. EmployeeDAOImplTest) for, supposedly, your EmployeeDAOImpl class (the actual "SUT"), but inside your test case (i.e. shouldReturnSorted_getAllTemplateNames()), you are calling... countryLocalizationDAOImpl.getAllTemplateNames(); thus testing the CountryLocalizationDAOImpl class (??), which is not the "SUT" of the EmployeeDAOImplTest class.
Additionally, it is not apparent that the EmployeeDAOImpl even uses a CountryLocalizationDAO instance (assuming an interface here as well), and if it does, then it is certainly something that should be "mocked" when the EmployeeDAOImpl "interacts" with instances of CountryLocalizationDAO, particularly in the context of a unit test. The only correlation between the EmployeeDAO and CountryLocalizationDAO is that the Employee has a country field.
There are a few other problems with your design/setup as well, but anyway.
Here are a few suggestions...
First, let's test what your EmployeeDAOImplTest is meant to test: EmployeeDAO.getAllEmployeeNames() returning names in sorted order. This in turn may give you ideas for how to test your CountryLocalizationDAO.getAllTemplateNames() method, if that even makes sense, i.e. if getAllTemplateNames() is in fact dependent on an Employee's country, where Employees are ordered by name (i.e. "empname") and accessed via the EmployeeAccessor.
public class EmployeeDAOImpl implements EmployeeDAO {

    private final EmployeeAccessor employeeAccessor;

    // where does the DataStaxCassandraTemplate reference come from?!
    private DataStaxCassandraTemplate datastaxCassandraTemplate = ...;

    public EmployeeDAOImpl() {
        this(datastaxCassandraTemplate.getAccessor(EmployeeAccessor.class));
    }

    public EmployeeDAOImpl(EmployeeAccessor employeeAccessor) {
        this.employeeAccessor = employeeAccessor;
    }

    protected EmployeeAccessor getEmployeeAccessor() {
        return this.employeeAccessor;
    }

    public List<String> getAllEmployeeNames() {
        List<Row> nameRows = getEmployeeAccessor().getAllEmployeeName().all();
        ...
    }
}
Then in your test class...
public class EmployeeDAOImplUnitTest {

    @Mock
    private EmployeeAccessor mockEmployeeAccessor;

    // SUT
    private EmployeeDAO employeeDao;

    @Before
    public void setup() {
        employeeDao = new EmployeeDAOImpl(mockEmployeeAccessor);
    }

    protected ResultSet mockResultSet(Row... rows) {
        ResultSet mockResultSet = mock(ResultSet.class);
        when(mockResultSet.all()).thenReturn(Arrays.asList(rows));
        return mockResultSet;
    }

    protected Row mockRow(String employeeName) {
        Row mockRow = mock(Row.class, employeeName);
        when(mockRow.getString(eq("empname"))).thenReturn(employeeName);
        return mockRow;
    }

    @Test
    public void getAllEmployeeNamesReturnsSortedListOfNames() {
        when(mockEmployeeAccessor.getAllEmployeeName())
                .thenReturn(mockResultSet(mockRow("jonDoe"), mockRow("janeDoe")));
        // containsExactly verifies the sorted order, not just membership
        assertThat(employeeDao.getAllEmployeeNames())
                .containsExactly("janeDoe", "jonDoe");
        verify(mockEmployeeAccessor, times(1)).getAllEmployeeName();
    }
}
Now, you can apply similar techniques if in fact there is an actual correlation between Employees and CountryLocalizationDAO via the EmployeeAccessor.
Hope this helps get you on a better track!
-j

Cannot use the "as" keyword with DynamicTableEntity (Azure Table)

I have an Azure table where I have inserted heterogeneous entities. After retrieval, I want to convert them to specific types using "as". I tried to do this, but it threw the following error:
Cannot convert DynamicTableEntity to TestingEntity via a reference conversion, boxing conversion, unboxing conversion, wrapping conversion, or null type conversion.
Is there any way I can convert my entities to a particular type?
My code is as follows:
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
    CloudConfigurationManager.GetSetting("StorageConnectionString"));

// Create the table client.
CloudTableClient tableClient = storageAccount.CreateCloudTableClient();
CloudTable table = tableClient.GetTableReference("TestingWithTableDatetime");

// Create the table if it doesn't exist.
table.CreateIfNotExists();

TableQuery<DynamicTableEntity> entityQuery = new TableQuery<DynamicTableEntity>();
IEnumerable<DynamicTableEntity> entities = table.ExecuteQuery(entityQuery);

foreach (var e in entities)
{
    EntityProperty entityTypeProperty;
    if (e.Properties.TryGetValue("EntityType", out entityTypeProperty))
    {
        if (entityTypeProperty.StringValue == "SampleEntity1")
        {
            // Cannot use "as" here
            var TestingWithTableDatetime = e as SampleEntity1;
        }
        if (entityTypeProperty.StringValue == "SampleEntity2")
        {
            // Use entityTypeProperty, RowKey, PartitionKey, Etag, and Timestamp
        }
        if (entityTypeProperty.StringValue == "SampleEntity3")
        {
            // Use entityTypeProperty, RowKey, PartitionKey, Etag, and Timestamp
        }
    }
}
Class definition for Sample1
public class Sample1 : TableEntity
{
    public Sample1(string pk, string rk)
    {
        this.PartitionKey = pk;
        this.RowKey = rk;
        EntityType = "MonitoringResources";
    }

    public string EntityType { get; set; }

    public Sample1()
    {
    }
}
Things I have tried: I created a testing class that inherits TableEntity, and then made sample1 inherit from testing, as follows.
Testing Class definition
public class testing : TableEntity
{
    public testing(string pk, string rk)
    {
        this.PartitionKey = pk;
        this.RowKey = rk; // MetricKey
    }

    public string EntityType { get; set; }

    public testing()
    {
    }
}
Modified class sample1:
public class sample1 : testing
{
    public sample1(string pk, string rk) : base(pk, rk)
    {
        EntityType = "sample1";
    }

    public sample1()
    {
    }
}
With this I didn't get any error, but when I convert to sample1 using "as", it returns null. That is expected: the query materializes instances whose runtime type is DynamicTableEntity, and "as" cannot change an object's runtime type, so the conversion yields null.
Finally, I ended up creating a helper:
public static class AzureManager
{
    /// <summary>
    /// Converts a dynamic table entity to a .NET object.
    /// </summary>
    /// <typeparam name="TOutput">Desired object type</typeparam>
    /// <param name="entity">Dynamic table entity</param>
    /// <returns>Output object</returns>
    public static TOutput ConvertTo<TOutput>(DynamicTableEntity entity)
    {
        return ConvertTo<TOutput>(entity.Properties, entity.PartitionKey, entity.RowKey);
    }

    /// <summary>
    /// Converts a dynamic table entity to a POCO .NET object.
    /// </summary>
    /// <typeparam name="TOutput">Desired object type</typeparam>
    /// <param name="properties">Property dictionary of the table entity</param>
    /// <returns>.NET object</returns>
    public static TOutput ConvertTo<TOutput>(IDictionary<string, EntityProperty> properties, string partitionKey, string rowKey)
    {
        var jobject = new JObject();
        properties.Add("PartitionKey", new EntityProperty(partitionKey));
        properties.Add("RowKey", new EntityProperty(rowKey));
        foreach (var property in properties)
        {
            WriteToJObject(jobject, property);
        }
        return jobject.ToObject<TOutput>();
    }

    public static void WriteToJObject(JObject jObject, KeyValuePair<string, EntityProperty> property)
    {
        switch (property.Value.PropertyType)
        {
            case EdmType.Binary:
                jObject.Add(property.Key, new JValue(property.Value.BinaryValue));
                return;
            case EdmType.Boolean:
                jObject.Add(property.Key, new JValue(property.Value.BooleanValue));
                return;
            case EdmType.DateTime:
                jObject.Add(property.Key, new JValue(property.Value.DateTime));
                return;
            case EdmType.Double:
                jObject.Add(property.Key, new JValue(property.Value.DoubleValue));
                return;
            case EdmType.Guid:
                jObject.Add(property.Key, new JValue(property.Value.GuidValue));
                return;
            case EdmType.Int32:
                jObject.Add(property.Key, new JValue(property.Value.Int32Value));
                return;
            case EdmType.Int64:
                jObject.Add(property.Key, new JValue(property.Value.Int64Value));
                return;
            case EdmType.String:
                jObject.Add(property.Key, new JValue(property.Value.StringValue));
                return;
            default:
                return;
        }
    }
}
The above works for me:
var obj = AzureManager.ConvertTo<Sample1>(e);
If you find any other way, please suggest it.
Here is an alternative and much simpler solution, natively supported by Azure Storage SDK versions > 8.0.0. You do not even need to write any transformation/conversion code :)
Have a look at:
TableEntity.Flatten method: https://msdn.microsoft.com/en-us/library/azure/mt775434.aspx
TableEntity.ConvertBack method: https://msdn.microsoft.com/en-us/library/azure/mt775432.aspx
These methods are provided by the SDK as static, standalone helpers. The Flatten method converts your entity into a flat dictionary of entity properties; you can then simply assign a partition key and row key, create a DynamicTableEntity from the flat dictionary, and write it to Azure Table storage.
When you want to read the entity back, read it as a DynamicTableEntity and pass the property dictionary of the returned entity to the TableEntity.ConvertBack method. Just tell it, via its generic type parameter, which type of object you want the property dictionary converted into, and it will do the conversion for you.
I originally implemented these APIs as NuGet packages; they are now integrated into the Azure Storage SDK. If you want to read a bit more about how they work, see the article I originally wrote about the NuGet packages:
https://doguarslan.wordpress.com/2016/02/03/writing-complex-objects-to-azure-table-storage/
Is there any way I can convert my entities to a particular type?
We could use DynamicTableEntityConverter to do that.
According to your code, we could use the following to convert a DynamicTableEntity to Sample1:
var TestingWithTableDatetime = DynamicTableEntityConverter.ConvertToPOCO<Sample1>(e);

NullPointerException in Custom Distinct Mapper

I am using Hazelcast 3.6.1 and implementing distinct-aggregate functionality with a custom MapReduce mapper, to get Solr-facet-style results.
public class DistinctMapper implements Mapper<String, Employee, String, Long> {

    private transient SimpleEntry<String, Employee> entry = new SimpleEntry<String, Employee>();
    private static final Long ONE = Long.valueOf(1L);

    private Supplier<String, Employee, String> supplier;

    public DistinctMapper(Supplier<String, Employee, String> supplier) {
        this.supplier = supplier;
    }

    @Override
    public void map(String key, Employee value, Context<String, Long> context) {
        System.out.println("Object " + entry + " and key " + key);
        entry.setKey(key);
        entry.setValue(value);
        String fieldValue = (String) supplier.apply(entry); // getValue(value, fieldName);
        if (null != fieldValue) {
            context.emit(fieldValue, ONE);
        }
    }
}
The mapper is failing with a NullPointerException, and the sysout statement shows that the entry object is null.
SimpleEntry: https://github.com/hazelcast/hazelcast/blob/v3.7-EA/hazelcast/src/main/java/com/hazelcast/mapreduce/aggregation/impl/SimpleEntry.java
Can you point out the issue in the above code? Thanks.
The entry field is transient. This means it is not serialized, so when the DistinctMapper object is deserialized on a Hazelcast node, its value is null.
Removing the transient keyword will solve the NullPointerException.
On a side note:
Why do you need this entry field? It doesn't seem to have any use.
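If you would rather keep the field transient (a sketch of my own, not from the original answer), you can recreate it lazily on the node where map() runs:

@Override
public void map(String key, Employee value, Context<String, Long> context) {
    if (entry == null) {
        // deserialization left the transient field null; recreate it once per node
        entry = new SimpleEntry<String, Employee>();
    }
    entry.setKey(key);
    entry.setValue(value);
    String fieldValue = (String) supplier.apply(entry);
    if (fieldValue != null) {
        context.emit(fieldValue, ONE);
    }
}

Either way, the mapper instance is serialized on the caller and deserialized on each member, so every non-transient field must be serializable.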

What layer is responsible for implementing a LazyLoading strategy for child objects of an entity

Let's say you have an order as an aggregate root. An order contains one or more line items.
It is my understanding that it's the repository's responsibility to instantiate an order object when asked.
The line items can be loaded at the time of the order object's creation (eager loaded), or the line item collection can be populated when it is accessed by the client code (lazy loaded).
If we are using eager loading, it seems that the repository code would take responsibility for hydrating the line items when the order is created.
However, if we are using lazy loading, how is the repository called when the LineItems collection is accessed, without creating a dependency on the repository from the order domain class?
The main problem is that a Repository should return only aggregate roots (it presents whole aggregates), so you cannot use a Repository to fetch line items directly; doing so could violate the aggregate's encapsulation.
I propose something like:
// Domain level:
public interface IOrderItemList {
    IEnumerable<OrderItem> GetItems();
}

public class Order {
    private IOrderItemList _orderItems;

    public IEnumerable<OrderItem> OrderItems
    { get { return _orderItems.GetItems(); } }

    public Order(IOrderItemList orderItems)
    {
        _orderItems = orderItems;
    }
}

public class OrderItemList : IOrderItemList
{
    private IList<OrderItem> _orderItems;

    public IEnumerable<OrderItem> GetItems() {
        return _orderItems; // or another logic
    }

    // other implementation details
}

// Data level
public class OrderItemListProxy : IOrderItemList
{
    // link to the 'real' object
    private OrderItemList _orderItemList;

    private int _orderId;
    // alternatively:
    // private OrderEntity _orderEntity;

    // ORM context
    private DbContext _context;

    public OrderItemListProxy(int orderId, DbContext context)
    {
        _orderId = orderId;
        _context = context;
    }

    public IEnumerable<OrderItem> GetItems() {
        if (_orderItemList == null)
        {
            var orderItemEntities = _context.Orders
                .Single(order => order.Id == _orderId).OrderItems;
            var orderItems = orderItemEntities.Select(...);
            // alternatively: use a factory to create OrderItem from OrderItemEntity
            _orderItemList = new OrderItemList(orderItems);
        }
        return _orderItemList.GetItems();
    }
}

public class OrderRepository
{
    // ORM context
    private DbContext _context;

    public Order GetOrder(int id)
    {
        var orderEntity = _context.Orders.Single(order => order.Id == id);
        var order = new Order(new OrderItemListProxy(id, _context));
        // alternatively:
        // var order = new Order(new OrderItemListProxy(orderEntity, _context))
        ...
        // init other fields
        ...
    }

    // other methods
    ...
}
Most important here is that IOrderItemList belongs to the domain layer, while OrderItemListProxy belongs to the data layer.
Finally,
You may use IList<OrderItem> instead of the custom IOrderItemList, or another appropriate interface.
The proxy implementation may differ.
I don't give best practices for using the db context here; that depends on the technologies you use.
