JpaPollingChannelAdapter with entity update at end - spring-integration

I am working on the integration of a vendor software product that uses a DB table mimicking a queue as its interface.
Here is the JPA entity representation of that table:
import java.io.Serializable;
import java.math.BigInteger;
import java.time.LocalDate;
import javax.persistence.*;
import lombok.Data;

@Data
@Entity
@Table(name = "TASK")
public static class Task implements Serializable {

    @Id
    @GeneratedValue(generator = "TaskId")
    @SequenceGenerator(name = "TaskId", sequenceName = "TASK_SEQ", allocationSize = 50)
    @Column(name = "ID")
    private BigInteger id;

    private Status status;
    private LocalDate processedDate;

    public enum Status {
        NEW, PROCESSED, ERROR
    }
}
I would like to use Spring Integration to poll NEW records from this table, handle them (the typical use case is to transform and post to a JMS queue), and then update the record with either:
status PROCESSED if everything went fine
status ERROR if an exception occurred
I tried to do this with JpaExecutor + JpaPollingChannelAdapter without much success. How would you recommend tackling this case?
Here is how I started:
import java.time.Duration;

import javax.persistence.EntityManager;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Bean;
import org.springframework.integration.config.EnableIntegration;
import org.springframework.integration.dsl.IntegrationFlow;
import org.springframework.integration.dsl.IntegrationFlows;
import org.springframework.integration.dsl.Pollers;
import org.springframework.integration.jpa.core.JpaExecutor;
import org.springframework.integration.jpa.inbound.JpaPollingChannelAdapter;
import org.springframework.stereotype.Component;

@Component
@EnableIntegration
public class ExampleJob {

    @Autowired
    EntityManager entityManager;

    @Bean
    IntegrationFlow taskExecutorFlow() {
        JpaExecutor selectExecutor = new JpaExecutor(entityManager);
        selectExecutor.setJpaQuery("from Task where status = 'NEW'");
        JpaPollingChannelAdapter adapter = new JpaPollingChannelAdapter(selectExecutor);
        return IntegrationFlows
                .from(adapter, c -> c.poller(Pollers.fixedDelay(Duration.ofMinutes(5))).autoStartup(true))
                .handle((task, headers) -> task)
                .get();
    }
}

Use a Jpa.outboundAdapter() as the next .handle() in your flow. It is going to perform a MERGE on the entity in the message payload.
See more in the docs: https://docs.spring.io/spring-integration/docs/current/reference/html/jpa.html#jpa-outbound-channel-adapter
The poller() can be configured with a transaction to make the whole flow against the retrieved entity transactional.
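A minimal sketch of how those pieces could fit together (assuming Spring Integration 5.x, the Lombok-generated setters on Task, Jpa from org.springframework.integration.jpa.dsl, and a JpaTransactionManager registered as transactionManager; the JMS posting step is elided):

@Bean
IntegrationFlow taskExecutorFlow(EntityManager entityManager) {
    JpaExecutor selectExecutor = new JpaExecutor(entityManager);
    selectExecutor.setJpaQuery("from Task where status = 'NEW'");
    JpaPollingChannelAdapter adapter = new JpaPollingChannelAdapter(selectExecutor);
    return IntegrationFlows
            // transactional() wraps each poll, and everything downstream
            // on the poller's thread, in a single transaction
            .from(adapter, c -> c.poller(Pollers.fixedDelay(Duration.ofMinutes(5)).transactional()))
            // the adapter emits a List<Task>, so process one entity at a time
            .split()
            .<Task>handle((task, headers) -> {
                // transform and post to JMS here; route to an error flow
                // (e.g. the poller's errorChannel) to set Status.ERROR instead
                task.setStatus(Task.Status.PROCESSED);
                task.setProcessedDate(LocalDate.now());
                return task;
            })
            // MERGE (the default persist mode) the updated entity back into the TASK table
            .handle(Jpa.outboundAdapter(entityManager).flush(true))
            .get();
}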

Related

How to manually stop or start the #InboundChannelAdapter by ourself?

Hello everyone,
I am working on Spring Cloud Data Flow. I created the application and it is working fine, but we have one requirement: it should be possible to start/stop it whenever we need.
If I keep autoStartup = "false" it does not start at first, but I don't know how to start or stop it after that. Most of the available material covers only XML config. I tried some code from online articles, but it didn't work.
Does anyone know how to resolve this? If there is an example somewhere, that would be really helpful.
Actually, with autoStartup set to false I am able to start the service using a CommandLineRunner, but when I try the same thing via a REST endpoint it throws an error.
Below is the relevant snippet from my code.
package com.javatechie.service;

import com.javatechie.impl.ProductBuilder;
import com.javatechie.TbeSource;
import com.javatechie.model.Product;
import com.javatechie.stopping.StopPollingAdvice;
import lombok.SneakyThrows;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.boot.CommandLineRunner;
import org.springframework.context.ApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Scope;
import org.springframework.integration.annotation.EndpointId;
import org.springframework.integration.annotation.InboundChannelAdapter;
import org.springframework.integration.annotation.Poller;
import org.springframework.integration.annotation.ServiceActivator;
import org.springframework.integration.channel.DirectChannel;
import org.springframework.integration.config.EnableIntegration;
import org.springframework.integration.config.ExpressionControlBusFactoryBean;
import org.springframework.integration.core.MessageSource;
import org.springframework.integration.dsl.IntegrationFlow;
import org.springframework.integration.dsl.IntegrationFlows;
import org.springframework.integration.handler.LoggingHandler;
import org.springframework.integration.scheduling.PollerMetadata;
import org.springframework.messaging.Message;
import org.springframework.messaging.MessageChannel;
import org.springframework.messaging.support.GenericMessage;
import org.springframework.messaging.support.MessageBuilder;
import org.springframework.scheduling.support.PeriodicTrigger;
import org.springframework.stereotype.Component;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

import java.util.Arrays;
import java.util.List;
import java.util.function.Supplier;

@Component
@EnableIntegration
@RestController
public class ProcessingCl {

    @Autowired
    ApplicationContext ac;

    @Bean
    @Scope("singleton")
    private ProductBuilder dataAccess() {
        return new ProductBuilder();
    }

    @Bean
    @EndpointId("inboundtest")
    @InboundChannelAdapter(channel = TbeSource.PR1, poller = @Poller(fixedDelay = "100", errorChannel = "errorchannel"), autoStartup = "false")
    public Supplier<Product> getProductSource(ProductBuilder dataAccess) {
        return () -> dataAccess.getNext();
    }

    @Bean
    MessageChannel controlChannel() {
        return new DirectChannel();
    }

    @Bean
    @ServiceActivator(inputChannel = "controlChannel")
    ExpressionControlBusFactoryBean controlBus() {
        return new ExpressionControlBusFactoryBean();
    }

    @Bean
    CommandLineRunner commandLineRunner(@Qualifier("controlChannel") MessageChannel controlChannel) {
        return (String[] args) -> {
            System.out.println("Starting incoming file adapter: ");
            boolean sent = controlChannel.send(new GenericMessage<>("@inboundtest.start()"));
            System.out.println("Sent control message successfully? " + sent);
            while (System.in.available() == 0) {
                Thread.sleep(50);
            }
        };
    }

    @GetMapping("/stop11")
    void test() {
        controlChannel().send(new GenericMessage<>("@inboundtest.stop()"));
        System.out.println("it is in call method to stop");
    }

    @GetMapping("/start11")
    void test1() {
        controlChannel().send(new GenericMessage<>("@inboundtest.start()"));
        System.out.println("it is in call method to start");
    }
}
Following is the error message.
Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed; nested exception is org.springframework.messaging.MessageDeliveryException: Dispatcher has no subscribers for channel 'unknown.channel.name'.; nested exception is org.springframework.integration.MessageDispatchingException: Dispatcher has no subscribers, failedMessage=GenericMessage [payload=@inboundtest.start(), headers={id=82c74865-2f68-c192-7d48-501bf3b28e02, timestamp=1608643036650}], failedMessage=GenericMessage [payload=@inboundtest.start(), headers={id=82c74865-2f68-c192-7d48-501bf3b28e02, timestamp=1608643036650}]] with root cause
org.springframework.integration.MessageDispatchingException: Dispatcher has no subscribers
The control bus, @EndpointId, and a command like @inboundtest.start() is the way to go.
The only problem is that you don't show what the subscriber for that TbeSource.PR1 channel is.
That's probably where you are missing the point of Spring Integration: you start producing messages from that polling channel adapter, but there is nothing like a service activator with that TbeSource.PR1 channel as its input.
I see you say it is Data Flow, but it is not clear from your description and configuration where the binding for a TbeSource.PR1 destination is. Do you have something like @EnableBinding and a respective Source definition?
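For completeness, a minimal sketch of the kind of subscriber that is missing (the channel constant and Product payload come from the question; the handler body and the MessageHandler import from org.springframework.messaging are just for illustration):

@Bean
@ServiceActivator(inputChannel = TbeSource.PR1)
public MessageHandler productHandler() {
    // any subscriber on TbeSource.PR1 resolves the
    // "Dispatcher has no subscribers" error for that channel
    return message -> System.out.println("Processing product: " + message.getPayload());
}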

Junit for QueryDsl

I'm trying to write a test case for a QueryDSL query, and I'm getting a NullPointerException when I run the test case.
Dsl Class
QMyClass myClass = QMyClass.myClass;
queryFactory = new JPAQueryFactory(em);
JPAQuery<?> from = queryFactory.from(myClass);
JPAQuery<?> where = from
        .where(predicates);
where.orderBy(orderSpecifier)
        .offset(sortOrder.getOffset())
        .limit(sortOrder.getPageSize());
JUnit test case:
import javax.inject.Provider;
import javax.persistence.EntityManager;

import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.InjectMocks;
import org.mockito.Mock;
import org.mockito.Mockito;
import org.mockito.MockitoAnnotations;
import org.powermock.modules.junit4.PowerMockRunner;
import org.springframework.data.jpa.repository.support.QueryDslRepositorySupport;

import com.querydsl.core.types.OrderSpecifier;
import com.querydsl.core.types.Predicate;
import com.querydsl.jpa.JPQLTemplates;
import com.querydsl.jpa.impl.JPAQuery;
import com.querydsl.jpa.impl.JPAQueryFactory;

@RunWith(PowerMockRunner.class)
public class MyClass {

    @Mock
    QueryDslRepositorySupport queryDslRepositorySupport;

    @Mock
    EntityManager entityManager;

    @Mock
    JPAQueryFactory queryFactory;

    @Mock
    JPAQuery step1;

    @InjectMocks
    MyClass myClass;

    @Before
    public void setUp() {
        MockitoAnnotations.initMocks(this);
        Provider<EntityManager> provider = new Provider<EntityManager>() {
            @Override
            public EntityManager get() {
                return entityManager;
            }
        };
        queryFactory = new JPAQueryFactory(JPQLTemplates.DEFAULT, provider);
    }

    @SuppressWarnings({ "rawtypes", "unchecked" })
    @Test
    public void sampleTest() throws Exception {
        QMyClass qMyClass = Mockito.mock(QMyClass.class);
        Mockito.when(queryFactory.from(qMyClass)).thenReturn(step1);
        Predicate step2 = Mockito.mock(Predicate.class);
        Mockito.when(step1.where(step2)).thenReturn(step1);
        OrderSpecifier step3 = Mockito.mock(OrderSpecifier.class);
        Mockito.when(step1.orderBy(step3)).thenReturn(step1);
        Mockito.when(step1.offset(Mockito.anyLong())).thenReturn(step1);
        Mockito.when(step1.limit(Mockito.anyLong())).thenReturn(step1);
        myClass.method("");
    }
}
When I run this test case I get a NullPointerException at line number 2 in the sampleTest() method. I googled but didn't find any article on this; I'm not sure why this NPE occurs even after mocking the queryFactory.
Here is the trace:
java.lang.NullPointerException
at com.querydsl.core.DefaultQueryMetadata.addJoin(DefaultQueryMetadata.java:154)
at com.querydsl.core.support.QueryMixin.from(QueryMixin.java:163)
at com.querydsl.jpa.JPAQueryBase.from(JPAQueryBase.java:77)
at com.querydsl.jpa.impl.JPAQueryFactory.from(JPAQueryFactory.java:116)
at
As I mentioned in the comment, your problem is that you do not use a mock, but the real object instead.
Remove the queryFactory initialisation from your setUp method.
@Before
public void setUp() {
    MockitoAnnotations.initMocks(this);
    Provider<EntityManager> provider = new Provider<EntityManager>() {
        @Override
        public EntityManager get() {
            return entityManager;
        }
    };
    // remove this line
    // queryFactory = new JPAQueryFactory(JPQLTemplates.DEFAULT, provider);
}
Also, you need to change your implementation: you cannot use
queryFactory = new JPAQueryFactory(em); inside your code, as it cannot be mocked.
What you could do instead is have a method that returns the JPAQueryFactory,
either from another class - which you can mock -
or in the same class, in which case you would need to spy on your class instead (see the sketch below).
As you didn't add the code for the class you want to test (MyClass - hopefully a different one from the identically named unit test?), another possibility would be that you are trying to use field or constructor injection (as indicated by your annotations), but then there should not be any object creation for the JPAQueryFactory in your code at all.
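A minimal sketch of the "same class plus spy" option (the class layout and method() body are assumptions, since the class under test was not posted):

public class MyClass {

    private final EntityManager em;

    public MyClass(EntityManager em) {
        this.em = em;
    }

    // factory creation isolated in an overridable method, so a spy can stub it
    protected JPAQueryFactory queryFactory() {
        return new JPAQueryFactory(em);
    }

    public List<?> method(String input) {
        return queryFactory().from(QMyClass.myClass).fetch();
    }
}

In the test, you would then spy on the class and stub the factory method:

MyClass myClass = Mockito.spy(new MyClass(entityManager));
Mockito.doReturn(queryFactory).when(myClass).queryFactory();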
This also seems to be wrong:
You should inject mocks into your class under test, not into your unit test class.
@RunWith(PowerMockRunner.class)
public class MyClass {
    ...
    @InjectMocks
    MyClass myClass;

spring-integration amqp outbound adapter race condition?

We've got a rather complicated spring-integration-amqp use case in one of our production applications, and we've been seeing some "org.springframework.integration.MessageDispatchingException: Dispatcher has no subscribers" exceptions on startup. After the initial errors on startup, we don't see those exceptions anymore from the same components. This looks like some kind of startup race condition on components that depend on AMQP outbound adapters and end up using them early in the lifecycle.
I can reproduce this by calling a gateway that sends to a channel wired to an outbound adapter in a @PostConstruct method.
config:
package gadams;

import org.springframework.amqp.core.Queue;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;
import org.springframework.integration.annotation.IntegrationComponentScan;
import org.springframework.integration.dsl.IntegrationFlow;
import org.springframework.integration.dsl.IntegrationFlows;
import org.springframework.integration.dsl.amqp.Amqp;
import org.springframework.integration.dsl.channel.MessageChannels;
import org.springframework.messaging.MessageChannel;

@SpringBootApplication
@IntegrationComponentScan
public class RabbitRace {

    public static void main(String[] args) {
        SpringApplication.run(RabbitRace.class, args);
    }

    @Bean(name = "HelloOut")
    public MessageChannel channelHelloOut() {
        return MessageChannels.direct().get();
    }

    @Bean
    public Queue queueHello() {
        return new Queue("hello.q");
    }

    @Bean(name = "helloOutFlow")
    public IntegrationFlow flowHelloOutToRabbit(RabbitTemplate rabbitTemplate) {
        return IntegrationFlows.from("HelloOut")
                .handle(Amqp.outboundAdapter(rabbitTemplate).routingKey("hello.q"))
                .get();
    }
}
gateway:
package gadams;

import org.springframework.integration.annotation.Gateway;
import org.springframework.integration.annotation.MessagingGateway;

@MessagingGateway
public interface HelloGateway {

    @Gateway(requestChannel = "HelloOut")
    void sendMessage(String message);
}
component:
package gadams;

import javax.annotation.PostConstruct;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.DependsOn;
import org.springframework.stereotype.Component;

@Component
@DependsOn("helloOutFlow")
public class HelloPublisher {

    @Autowired
    private HelloGateway helloGateway;

    @PostConstruct
    public void postConstruct() {
        helloGateway.sendMessage("hello");
    }
}
In my production use case, we have a component with a @PostConstruct method in which we use a TaskScheduler to schedule a bunch of components, some of which depend on AMQP outbound adapters, and some of those end up executing immediately. I've tried putting bean names on the IntegrationFlows that involve an outbound adapter and using @DependsOn on the beans that use the gateways and/or on the gateway itself, but that doesn't get rid of the errors on startup.
This is all about Lifecycle. Any Spring Integration endpoint starts listening for, or producing, messages only when its start() is performed.
Typically, for the standard default autoStartup = true, that is done in ApplicationContext.finishRefresh() as:
// Propagate refresh to lifecycle processor first.
getLifecycleProcessor().onRefresh();
Starting to produce messages to the channel from a @PostConstruct (afterPropertiesSet()) callback is really very early, because it happens far ahead of finishRefresh().
You really should reconsider your producing logic and move that implementation into the SmartLifecycle.start() phase.
See more info in the Reference Manual.
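For illustration, a minimal sketch of the HelloPublisher reworked along those lines (assuming Spring 5+, where the remaining SmartLifecycle methods have sensible defaults):

@Component
public class HelloPublisher implements SmartLifecycle {

    @Autowired
    private HelloGateway helloGateway;

    private volatile boolean running;

    @Override
    public void start() {
        // invoked from finishRefresh(), after the endpoints (including the
        // subscriber for the AMQP outbound adapter) have been started
        helloGateway.sendMessage("hello");
        this.running = true;
    }

    @Override
    public void stop() {
        this.running = false;
    }

    @Override
    public boolean isRunning() {
        return this.running;
    }

    @Override
    public int getPhase() {
        // run as late as possible, so all integration endpoints start first
        return Integer.MAX_VALUE;
    }
}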

Spring Batch thread-safe Map job repository

the Spring Batch docs say of the Map-backed job repository:
Note that the in-memory repository is volatile and so does not allow restart between JVM instances. It also cannot guarantee that two job instances with the same parameters are launched simultaneously, and is not suitable for use in a multi-threaded Job, or a locally partitioned Step. So use the database version of the repository wherever you need those features.
I would like to use a Map job repository, and I do not care about restarting, prevention of concurrent job executions, etc. but I do care about being able to use multi-threading and local partitioning.
My batch application has some partitioned steps, and at first glance it seems to run just fine with a Map-backed job repository.
What is the reason this is said not to be possible with MapJobRepositoryFactoryBean? Looking at the implementation of the Map DAOs, they use ConcurrentHashMap. Is this not thread-safe?
I would advise you to follow the documentation, rather than relying on implementation details. Even if the maps are individually thread-safe, there might be race conditions in changes that involve more than one of these maps.
You can use an in-memory database very easily. Example:
@Grapes([
        @Grab('org.springframework:spring-jdbc:4.0.5.RELEASE'),
        @Grab('com.h2database:h2:1.3.175'),
        @Grab('org.springframework.batch:spring-batch-core:3.0.6.RELEASE'),
        // must be passed with -cp, for whatever reason the GroovyClassLoader
        // is not used for com.thoughtworks.xstream.io.json.JettisonMappedXmlDriver
        //@Grab('org.codehaus.jettison:jettison:1.2'),
])

import org.h2.jdbcx.JdbcDataSource
import org.springframework.batch.core.Job
import org.springframework.batch.core.JobParameters
import org.springframework.batch.core.Step
import org.springframework.batch.core.StepContribution
import org.springframework.batch.core.configuration.annotation.EnableBatchProcessing
import org.springframework.batch.core.configuration.annotation.JobBuilderFactory
import org.springframework.batch.core.configuration.annotation.StepBuilderFactory
import org.springframework.batch.core.launch.JobLauncher
import org.springframework.batch.core.scope.context.ChunkContext
import org.springframework.batch.core.step.tasklet.Tasklet
import org.springframework.batch.repeat.RepeatStatus
import org.springframework.beans.factory.annotation.Autowired
import org.springframework.context.annotation.AnnotationConfigApplicationContext
import org.springframework.context.annotation.Bean
import org.springframework.context.annotation.Configuration
import org.springframework.core.io.ResourceLoader
import org.springframework.jdbc.datasource.init.DatabasePopulatorUtils
import org.springframework.jdbc.datasource.init.ResourceDatabasePopulator

import javax.annotation.PostConstruct
import javax.sql.DataSource

@Configuration
@EnableBatchProcessing
class AppConfig {

    @Autowired
    private JobBuilderFactory jobs

    @Autowired
    private StepBuilderFactory steps

    @Bean
    public Job job() {
        return jobs.get("myJob").start(step1()).build()
    }

    @Bean
    Step step1() {
        this.steps.get('step1')
                .tasklet(new MyTasklet())
                .build()
    }

    @Bean
    DataSource dataSource() {
        new JdbcDataSource().with {
            url = 'jdbc:h2:mem:temp_db;DB_CLOSE_DELAY=-1'
            user = 'sa'
            password = 'sa'
            it
        }
    }

    @Bean
    BatchSchemaPopulator batchSchemaPopulator() {
        new BatchSchemaPopulator()
    }
}

class BatchSchemaPopulator {

    @Autowired
    ResourceLoader resourceLoader

    @Autowired
    DataSource dataSource

    @PostConstruct
    void init() {
        def populator = new ResourceDatabasePopulator()
        populator.addScript(
                resourceLoader.getResource(
                        'classpath:/org/springframework/batch/core/schema-h2.sql'))
        DatabasePopulatorUtils.execute populator, dataSource
    }
}

class MyTasklet implements Tasklet {
    @Override
    RepeatStatus execute(StepContribution contribution, ChunkContext chunkContext) throws Exception {
        println 'TEST!'
        RepeatStatus.FINISHED
    }
}

def ctx = new AnnotationConfigApplicationContext(AppConfig)
def launcher = ctx.getBean(JobLauncher)
def jobExecution = launcher.run(ctx.getBean(Job), new JobParameters([:]))
println "Status is: ${jobExecution.status}"

Does Spring-data-Cassandra 1.3.2.RELEASE support UDT annotations?

Is @UDT (http://docs.datastax.com/en/developer/java-driver/2.1/java-driver/reference/mappingUdts.html) supported by Spring Data Cassandra 1.3.2.RELEASE? If not, how can I work around this?
Thanks
See the details here:
https://jira.spring.io/browse/DATACASS-172
I faced the same issue, and it sounds like it does not.
The debug process shows me that Spring Data Cassandra checks for the @Table, @Persistent or @PrimaryKeyClass annotations only, and otherwise raises this exception:
Invocation of init method failed; nested exception is org.springframework.data.cassandra.mapping.VerifierMappingExceptions:
Cassandra entities must have the @Table, @Persistent or @PrimaryKeyClass Annotation
But I found a solution.
I figured out an approach that allows me to manage both entities that include UDTs and ones that don't. In my application I use the Spring Data Cassandra project together with the direct DataStax core driver. The repositories that don't contain objects with UDTs use the Spring Data Cassandra approach, and the objects that include UDTs use custom repositories.
The custom repositories use the DataStax mapper and work correctly with UDTs
(they are located in a separate package; see the notes below for why that's needed):
package com.fyb.cassandra.custom.repositories.impl;

import java.util.List;
import java.util.UUID;

import javax.annotation.PostConstruct;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.cassandra.config.CassandraSessionFactoryBean;

import com.datastax.driver.core.ResultSet;
import com.datastax.driver.mapping.Mapper;
import com.datastax.driver.mapping.MappingManager;
import com.datastax.driver.mapping.Result;
import com.google.common.collect.Lists;
import com.fyb.cassandra.custom.repositories.AccountDeviceRepository;
import com.fyb.cassandra.dto.AccountDevice;

public class AccountDeviceRepositoryImpl implements AccountDeviceRepository {

    @Autowired
    public CassandraSessionFactoryBean session;

    private Mapper<AccountDevice> mapper;

    @PostConstruct
    void initialize() {
        mapper = new MappingManager(session.getObject()).mapper(AccountDevice.class);
    }

    @Override
    public List<AccountDevice> findAll() {
        return fetchByQuery("SELECT * FROM account_devices");
    }

    @Override
    public void save(AccountDevice accountDevice) {
        mapper.save(accountDevice);
    }

    @Override
    public void deleteByConditions(UUID accountId, UUID systemId, UUID deviceId) {
        final String query = "DELETE FROM account_devices where account_id =" + accountId + " AND system_id=" + systemId
                + " AND device_id=" + deviceId;
        session.getObject().execute(query);
    }

    @Override
    public List<AccountDevice> findByAccountId(UUID accountId) {
        final String query = "SELECT * FROM account_devices where account_id=" + accountId;
        return fetchByQuery(query);
    }

    /*
     * Take any valid CQL query and try to map the result set to the given list of appropriate <T> types.
     */
    private List<AccountDevice> fetchByQuery(String query) {
        ResultSet results = session.getObject().execute(query);
        Result<AccountDevice> accountsDevices = mapper.map(results);
        List<AccountDevice> result = Lists.newArrayList();
        for (AccountDevice accountsDevice : accountsDevices) {
            result.add(accountsDevice);
        }
        return result;
    }
}
And the Spring Data repositories responsible for managing entities that don't include UDT objects look as follows:
package com.fyb.cassandra.repositories;

import org.springframework.data.cassandra.repository.CassandraRepository;
import com.fyb.cassandra.dto.AccountUser;
import org.springframework.data.cassandra.repository.Query;
import org.springframework.stereotype.Repository;

import java.util.List;
import java.util.UUID;

@Repository
public interface AccountUserRepository extends CassandraRepository<AccountUser> {

    @Query("SELECT * FROM account_users WHERE account_id=?0")
    List<AccountUser> findByAccountId(UUID accountId);
}
I've tested this solution and it works 100%.
In addition, I've attached my POJO objects.
POJO that uses only DataStax annotations:
package com.fyb.cassandra.dto;

import java.util.List;
import java.util.Map;
import java.util.UUID;

import com.datastax.driver.mapping.annotations.ClusteringColumn;
import com.datastax.driver.mapping.annotations.Column;
import com.datastax.driver.mapping.annotations.Frozen;
import com.datastax.driver.mapping.annotations.FrozenValue;
import com.datastax.driver.mapping.annotations.PartitionKey;
import com.datastax.driver.mapping.annotations.Table;

@Table(name = "account_systems")
public class AccountSystem {

    @PartitionKey
    @Column(name = "account_id")
    private java.util.UUID accountId;

    @ClusteringColumn
    @Column(name = "system_id")
    private java.util.UUID systemId;

    @Frozen
    private Location location;

    @FrozenValue
    @Column(name = "user_token")
    private List<UserToken> userToken;

    @Column(name = "product_type_id")
    private int productTypeId;

    @Column(name = "serial_number")
    private String serialNumber;
}
POJO without UDTs, using only the Spring Data Cassandra framework:
package com.fyb.cassandra.dto;

import java.util.Date;
import java.util.UUID;

import org.springframework.cassandra.core.PrimaryKeyType;
import org.springframework.data.cassandra.mapping.Column;
import org.springframework.data.cassandra.mapping.PrimaryKeyColumn;
import org.springframework.data.cassandra.mapping.Table;

@Table(value = "accounts")
public class Account {

    @PrimaryKeyColumn(name = "account_id", ordinal = 0, type = PrimaryKeyType.PARTITIONED)
    private java.util.UUID accountId;

    @Column(value = "account_name")
    private String accountName;

    @Column(value = "currency")
    private String currency;
}
Note that the two kinds of entities use different annotations:
@PrimaryKeyColumn(name = "account_id", ordinal = 0, type = PrimaryKeyType.PARTITIONED) and @PartitionKey
@ClusteringColumn and @PrimaryKeyColumn(name = "area_parent_id", ordinal = 2, type = PrimaryKeyType.CLUSTERED)
At first glance this is uncomfortable, but it allows you to work with objects that include UDTs and ones that don't.
One important note: the two kinds of repositories (those that use UDTs and those that don't) should reside in different packages, because the Spring config looks for base packages with repositories:
@Configuration
@EnableCassandraRepositories(basePackages = {
        "com.fyb.cassandra.repositories" })
public class CassandraConfig {
    ..........
}
User-defined types are now supported by Spring Data Cassandra. The latest release, 1.5.0.RELEASE, uses the DataStax Cassandra driver 3.1.3, and hence it works now. Follow the steps below to make it work.
How to use the UserDefinedType (UDT) feature with Spring Data Cassandra:
We need to use the latest jar of Spring Data Cassandra (1.5.0.RELEASE):
group: 'org.springframework.data', name: 'spring-data-cassandra', version: '1.5.0.RELEASE'
Make sure it uses the following versions of the jars:
datastax.cassandra.driver.version=3.1.3
spring.data.cassandra.version=1.5.0.RELEASE
spring.data.commons.version=1.13.0.RELEASE
spring.cql.version=1.5.0.RELEASE
Create the user-defined type in Cassandra (the type name should be the same as the one used in the POJO class):
Address data type:
CREATE TYPE address_type (
    id text,
    address_type text,
    first_name text,
    phone text
);
Create the column family with one of its columns as a UDT in Cassandra:
Employee table:
CREATE TABLE employee(
    employee_id uuid,
    employee_name text,
    address frozen<address_type>,
    primary key (employee_id, employee_name)
);
In the domain class, define the field with the @CassandraType annotation, and the DataType should be UDT:
@Table("employee")
public class Employee {
    // other fields
    @CassandraType(type = DataType.Name.UDT, userTypeName = "address_type")
    private Address address;
}
Create a domain class for the user-defined type. We need to make sure that the column names in the user-defined type schema are the same as the field names in the domain class:
@UserDefinedType("address_type")
public class Address {

    @CassandraType(type = DataType.Name.TEXT)
    private String id;

    @CassandraType(type = DataType.Name.TEXT)
    private String address_type;
}
In the Cassandra config, change this:
@Bean
public CassandraMappingContext mappingContext() throws Exception {
    BasicCassandraMappingContext mappingContext = new BasicCassandraMappingContext();
    mappingContext.setUserTypeResolver(new SimpleUserTypeResolver(cluster().getObject(), cassandraKeyspace));
    return mappingContext;
}
The user-defined type should have the same name everywhere, e.g.:
@UserDefinedType("address_type")
@CassandraType(type = DataType.Name.UDT, userTypeName = "address_type")
CREATE TYPE address_type
