How to use compact serialization in Hazelcast in a Kubernetes cluster

I tried to follow this tutorial on compact serialization, but I don't know what the workflow is when I want to use it with a custom CompactSerializer.
So I have an Employee object which I want to de/serialize. When I install Hazelcast in Kubernetes I need to add a jar with this class. Now I want to add a field, which should be supported by schema evolution:
Compact serialization permits schemas and classes to evolve by adding
or removing fields, or by changing the types of fields. More than one
version of a class may live in the same cluster and different clients
or members might use different versions of the class.
but how can I add this class to a running Hazelcast cluster in Kubernetes without reinstalling it?
When I add this serializer in my application:
hzConfig.getSerializationConfig().getCompactSerializationConfig().setEnabled(true);
hzConfig.getSerializationConfig().getCompactSerializationConfig().register(Employee.class, "employee", new EmployeeSerializer());
this will be used just for serialization inside the Hazelcast cluster. But when I want to deserialize the object from a Hazelcast client where I did not register this serializer, I get an exception that GenericRecord cannot be cast to Employee.
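Presumably I need to register the same serializer on the client side as well. A minimal sketch of what I assume that looks like (a hypothetical Employee with name/age fields, using the same compact serialization API as above):

import com.hazelcast.client.HazelcastClient;
import com.hazelcast.client.config.ClientConfig;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.nio.serialization.compact.CompactReader;
import com.hazelcast.nio.serialization.compact.CompactSerializer;
import com.hazelcast.nio.serialization.compact.CompactWriter;

public class EmployeeClient {

    // Hypothetical domain class; the real Employee has whatever fields you need.
    public static class Employee {
        private final String name;
        private final int age;

        public Employee(String name, int age) {
            this.name = name;
            this.age = age;
        }

        public String getName() { return name; }
        public int getAge() { return age; }
    }

    // The same serializer the members use; field names must match on both sides
    // so the schema written by the cluster lines up with what the client reads.
    public static class EmployeeSerializer implements CompactSerializer<Employee> {
        @Override
        public Employee read(CompactReader reader) {
            return new Employee(reader.readString("name"), reader.readInt32("age"));
        }

        @Override
        public void write(CompactWriter writer, Employee employee) {
            writer.writeString("name", employee.getName());
            writer.writeInt32("age", employee.getAge());
        }
    }

    public static void main(String[] args) {
        // Register the serializer in the client config too; without this the client
        // has no mapping from the "employee" schema to the Employee class and
        // returns a GenericRecord instead.
        ClientConfig clientConfig = new ClientConfig();
        clientConfig.getSerializationConfig().getCompactSerializationConfig().setEnabled(true);
        clientConfig.getSerializationConfig().getCompactSerializationConfig()
                .register(Employee.class, "employee", new EmployeeSerializer());

        HazelcastInstance client = HazelcastClient.newHazelcastClient(clientConfig);
        Employee employee = (Employee) client.getMap("employees").get("some-key");
    }
}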
So I am curious whether there is a tutorial on the workflow for using compact serialization with a custom object.

Related

How to access schema in a Kafka consumer when using schema registry?

I'm integrating Kafka into our microservices architecture. We're using Karapace as the schema registry and protobuf as the data format. So in the producer microservice there's a .proto file defining the schema to be pushed, and I've created the corresponding TypeScript interfaces using ts-node.
On the consumer side, the schema registry will fetch the schema associated with the received data to validate and deserialise it. But how do I access the corresponding interfaces in the consuming microservice, so as to implement type checking?
The direct way seems to be writing interfaces for the expected response data beforehand. But then it will hamper schema evolution and I'll be back to square one.
writing interfaces for the expected response data beforehand
Yes, but you can also download them rather than re-write them. I.e. your producer code (assuming it is also TypeScript) can be responsible for publishing the .d.ts types to a common NPM registry, which the consumer then adds as a dependency.
Or you can set up NPM/yarn pre-build hooks to download the schema from the registry and run the necessary protoc commands to compile it, similar to Confluent's own Schema Registry Maven/Gradle plugins + Avro/Protobuf plugins.

Create a custom LoadBalancing Policy for spark cassandra connector

I know that the spark-cassandra-connector comes with its own default load-balancing policy implementation (DefaultLoadBalancingPolicy). How can I go about implementing my own custom load-balancing class? I want the application to use the WhiteListRoundRobin policy. What steps would I need to take? I'm still a newbie at working with Spark and Cassandra, and I would appreciate any guidance on this. Thanks
You can look into the implementation of LocalNodeFirstLoadBalancingPolicy - basically you need to create (if it doesn't already exist) a class that inherits from LoadBalancingPolicy and implement the load-balancing logic you require.
Then you need to create a class implementing CassandraConnectionFactory that configures the Cassandra session with the required load-balancing implementation. The simplest way is to take the code of DefaultConnectionFactory, but instead of using LocalNodeFirstLoadBalancingPolicy, specify your load-balancing class.
And then you specify that connection factory class name in the spark.cassandra.connection.factory configuration property.
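For example, a minimal sketch (in Java, with com.example.MyConnectionFactory and the host name standing in as placeholders for your own values) of setting that property:

import org.apache.spark.SparkConf;
import org.apache.spark.sql.SparkSession;

public class CustomConnectionFactoryExample {
    public static void main(String[] args) {
        // com.example.MyConnectionFactory is a placeholder for your own
        // CassandraConnectionFactory implementation, which must be on the classpath.
        SparkConf conf = new SparkConf()
                .setAppName("custom-lb-example")
                .set("spark.cassandra.connection.host", "cassandra-host")
                .set("spark.cassandra.connection.factory", "com.example.MyConnectionFactory");

        SparkSession spark = SparkSession.builder().config(conf).getOrCreate();
        // ...read/write Cassandra tables as usual; the connector now builds its
        // sessions through the custom factory and its load-balancing policy...
        spark.stop();
    }
}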

Liferay Service Builder : how to use hibernate session factory for relation

In my project I have multiple Liferay plugin portlets. I have a single plugin portlet containing the service builder, and all the other plugin portlets use that same service builder portlet.
Ex:
Portlet1, Portlet2, Portlet3 and a ServiceBuilder portlet. Portlet1, Portlet2 and Portlet3 all use the same ServiceBuilder portlet.
This service builder is connected to an external database, and I am inserting/fetching data from this external database. There are one-to-many and many-to-one relationships in the database. I want to use the Hibernate relationship model for these relationships and run complex queries to fetch data, so I want to use the Hibernate session factory in my service builder.
Please give your valuable advice or code so that I can do this as per the requirement.
Please note:
1. I have read about Liferay relationships in tables, but this has not worked as per my requirement.
2. Most of the tables are managed by another application; I am only using their data.
Service Builder does not work this way, and if you want the relations you should not use it.
The idea behind Service Builder is to have an easy DB access layer, one entity at a time, with the relations resolved in the business logic.
If you want the relations handled by the persistence layer, you need to use plain Hibernate.
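For illustration, a minimal sketch of mapping one such one-to-many / many-to-one pair with plain Hibernate/JPA annotations (Department and Employee are hypothetical entity names standing in for the externally managed tables):

import java.util.List;
import javax.persistence.Entity;
import javax.persistence.FetchType;
import javax.persistence.Id;
import javax.persistence.JoinColumn;
import javax.persistence.ManyToOne;
import javax.persistence.OneToMany;
import javax.persistence.Table;

// "One" side: a department has many employees.
@Entity
@Table(name = "DEPARTMENT")
class Department {
    @Id
    private Long id;

    // Resolved by Hibernate itself rather than in the business logic.
    @OneToMany(mappedBy = "department", fetch = FetchType.LAZY)
    private List<Employee> employees;
}

// "Many" side: each employee belongs to exactly one department.
@Entity
@Table(name = "EMPLOYEE")
class Employee {
    @Id
    private Long id;

    @ManyToOne(fetch = FetchType.LAZY)
    @JoinColumn(name = "DEPARTMENT_ID")
    private Department department;
}

A Hibernate SessionFactory configured with these classes can then run the joined/complex queries directly.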

Sharing Hazelcast Cluster across multiple development team

I have read in a few places that the MapStore and MapLoader must run with the Hazelcast node. I would like to find out whether there is any way to implement a MapStore/MapLoader separately from the Hazelcast node.
Background:
If I have a Hazelcast cluster for the team, and this cluster is to be used by different sub-teams providing different maps as data, and each sub-team should implement the MapStore/MapLoader for the map they own, how can this be done? (Note that each sub-team has their own SVN repository.)
Thanks in advance~
MapLoader's load() operation is only invoked on the node that would have the key when the key is missing, so there is no way to push this processing elsewhere.
However, each map can have a different MapStore/MapLoader implementation, so having a different team provide each is certainly feasible.
Exactly how you achieve this comes down to your build and deploy practices. For example, each team's classes could be in a separate jar file on the classpath. Or, there could be a single jar file constructed containing the classes provided by each team. Many ways exist!
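As a rough sketch of that per-map wiring (the map names and MapStore class names below are hypothetical), each team's implementation is just referenced from the configuration of the map it owns:

import com.hazelcast.config.Config;
import com.hazelcast.config.MapStoreConfig;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

public class PerTeamMapStores {
    public static void main(String[] args) {
        Config config = new Config();

        // Team A owns the "orders" map and ships com.teama.OrdersMapStore in its jar.
        MapStoreConfig ordersStore = new MapStoreConfig()
                .setEnabled(true)
                .setClassName("com.teama.OrdersMapStore");
        config.getMapConfig("orders").setMapStoreConfig(ordersStore);

        // Team B owns the "customers" map and ships com.teamb.CustomersMapStore in its jar.
        MapStoreConfig customersStore = new MapStoreConfig()
                .setEnabled(true)
                .setClassName("com.teamb.CustomersMapStore");
        config.getMapConfig("customers").setMapStoreConfig(customersStore);

        HazelcastInstance instance = Hazelcast.newHazelcastInstance(config);
    }
}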

ASP.NET MVC & Entity Framework

We have a SQL Server with multiple databases (different schemas) and I need to develop an application in ASP.NET MVC & Entity Framework that can connect to any of these databases at runtime and perform DML operations. If a new database is added to the SQL Server, then the application should be able to connect to this new database without any configuration/code change. I am looking for exactly the kind of DML operations handled by myLittleAdmin.
Can anyone advise me on this, please?
Unfortunately you won't be able to do this with Entity Framework. Entity Framework operates in two different modes, Code First and Database First.
The former means writing your model classes for your data first; Entity Framework then works out a database schema based on those classes, handling foreign key references etc. automatically based on the references between models.
The latter is effectively the opposite: you define the database schema and Entity Framework produces the model classes from it. I have always found this method to be cleaner, as you have more control over the database structure and I find it more difficult to make mistakes.
In your situation Entity Framework needs to know the structure of the database beforehand to enable it to read from it, so it cannot "connect to this new database without any configuration/code change". If you can provide more information about what you are wanting, then I may be able to help.
I recently created an asset management system which could store generic assets that would traditionally be stored in separate tables. This solution was developed using Entity Framework, but the database was designed in such a way that it could handle generic asset objects and store them, i.e. if we created a new asset type then no database change was required, and hence there was no code change for Entity Framework either.
