Does Spring-data-Cassandra 1.3.2.RELEASE support UDT annotations?

Is @UDT (http://docs.datastax.com/en/developer/java-driver/2.1/java-driver/reference/mappingUdts.html) supported by Spring-data-Cassandra 1.3.2.RELEASE? If not, how can I work around this?
Thanks

See the details here:
https://jira.spring.io/browse/DATACASS-172
I faced the same issue, and it seems it does not.
Debugging shows that Spring Data Cassandra checks only for the @Table, @Persistent or @PrimaryKeyClass annotation and raises an exception otherwise:
Invocation of init method failed; nested exception is org.springframework.data.cassandra.mapping.VerifierMappingExceptions:
Cassandra entities must have the @Table, @Persistent or @PrimaryKeyClass Annotation
But I found a solution.
I figured out an approach that lets me manage both entities that include UDTs and those that don't. In my application I use the Spring Data Cassandra project together with the DataStax core driver directly. The repositories that don't contain objects with UDTs use the Spring Data Cassandra approach, and the objects that include UDTs use custom repositories.
The custom repositories use the DataStax mapper and work correctly with UDTs
(they are located in a separate package; see the notes below on why this is needed):
package com.fyb.cassandra.custom.repositories.impl;

import java.util.List;
import java.util.UUID;

import javax.annotation.PostConstruct;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.cassandra.config.CassandraSessionFactoryBean;

import com.datastax.driver.core.ResultSet;
import com.datastax.driver.mapping.Mapper;
import com.datastax.driver.mapping.MappingManager;
import com.datastax.driver.mapping.Result;
import com.google.common.collect.Lists;
import com.fyb.cassandra.custom.repositories.AccountDeviceRepository;
import com.fyb.cassandra.dto.AccountDevice;

public class AccountDeviceRepositoryImpl implements AccountDeviceRepository {

    @Autowired
    public CassandraSessionFactoryBean session;

    private Mapper<AccountDevice> mapper;

    @PostConstruct
    void initialize() {
        mapper = new MappingManager(session.getObject()).mapper(AccountDevice.class);
    }

    @Override
    public List<AccountDevice> findAll() {
        return fetchByQuery("SELECT * FROM account_devices");
    }

    @Override
    public void save(AccountDevice accountDevice) {
        mapper.save(accountDevice);
    }

    @Override
    public void deleteByConditions(UUID accountId, UUID systemId, UUID deviceId) {
        final String query = "DELETE FROM account_devices WHERE account_id=" + accountId + " AND system_id=" + systemId
                + " AND device_id=" + deviceId;
        session.getObject().execute(query);
    }

    @Override
    public List<AccountDevice> findByAccountId(UUID accountId) {
        final String query = "SELECT * FROM account_devices WHERE account_id=" + accountId;
        return fetchByQuery(query);
    }

    /*
     * Takes any valid CQL query and maps the result set to a list of AccountDevice objects.
     */
    private List<AccountDevice> fetchByQuery(String query) {
        ResultSet results = session.getObject().execute(query);
        Result<AccountDevice> accountsDevices = mapper.map(results);
        List<AccountDevice> result = Lists.newArrayList();
        for (AccountDevice accountsDevice : accountsDevices) {
            result.add(accountsDevice);
        }
        return result;
    }
}
And the Spring Data repositories responsible for managing entities that don't include UDT objects look as follows:
package com.fyb.cassandra.repositories;

import java.util.List;
import java.util.UUID;

import org.springframework.data.cassandra.repository.CassandraRepository;
import org.springframework.data.cassandra.repository.Query;
import org.springframework.stereotype.Repository;

import com.fyb.cassandra.dto.AccountUser;

@Repository
public interface AccountUserRepository extends CassandraRepository<AccountUser> {

    @Query("SELECT * FROM account_users WHERE account_id=?0")
    List<AccountUser> findByAccountId(UUID accountId);
}
I've tested this solution and it works.
In addition, I've attached my POJO objects.
POJO that uses only DataStax annotations:
package com.fyb.cassandra.dto;

import java.util.List;
import java.util.Map;
import java.util.UUID;

import com.datastax.driver.mapping.annotations.ClusteringColumn;
import com.datastax.driver.mapping.annotations.Column;
import com.datastax.driver.mapping.annotations.Frozen;
import com.datastax.driver.mapping.annotations.FrozenValue;
import com.datastax.driver.mapping.annotations.PartitionKey;
import com.datastax.driver.mapping.annotations.Table;

@Table(name = "account_systems")
public class AccountSystem {

    @PartitionKey
    @Column(name = "account_id")
    private java.util.UUID accountId;

    @ClusteringColumn
    @Column(name = "system_id")
    private java.util.UUID systemId;

    @Frozen
    private Location location;

    @FrozenValue
    @Column(name = "user_token")
    private List<UserToken> userToken;

    @Column(name = "product_type_id")
    private int productTypeId;

    @Column(name = "serial_number")
    private String serialNumber;
}
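The Location and UserToken fields above refer to UDT-mapped classes. For illustration, here is a minimal sketch of what Location might look like with the driver's @UDT annotation; the keyspace and field names are assumptions, not from the original post:
package com.fyb.cassandra.dto;

import com.datastax.driver.mapping.annotations.Field;
import com.datastax.driver.mapping.annotations.UDT;

// Hypothetical UDT-mapped class; the CQL type "location" and its fields are assumed.
@UDT(keyspace = "my_keyspace", name = "location")
public class Location {

    @Field(name = "city")
    private String city;

    @Field(name = "country")
    private String country;

    // getters and setters omitted for brevity
}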
POJO without UDTs, using only the Spring Data Cassandra framework:
package com.fyb.cassandra.dto;

import java.util.Date;
import java.util.UUID;

import org.springframework.cassandra.core.PrimaryKeyType;
import org.springframework.data.cassandra.mapping.Column;
import org.springframework.data.cassandra.mapping.PrimaryKeyColumn;
import org.springframework.data.cassandra.mapping.Table;

@Table(value = "accounts")
public class Account {

    @PrimaryKeyColumn(name = "account_id", ordinal = 0, type = PrimaryKeyType.PARTITIONED)
    private java.util.UUID accountId;

    @Column(value = "account_name")
    private String accountName;

    @Column(value = "currency")
    private String currency;
}
Note that these entities use different annotations:
@PrimaryKeyColumn(name = "account_id", ordinal = 0, type = PrimaryKeyType.PARTITIONED) and @PartitionKey
@ClusteringColumn and @PrimaryKeyColumn(name = "area_parent_id", ordinal = 2, type = PrimaryKeyType.CLUSTERED)
At first glance this is inconvenient, but it allows you to work with objects that include UDTs and objects that don't.
One important note: the two kinds of repositories (those that use UDTs and those that don't) should reside in different packages, because the Spring config scans base packages for repositories:
@Configuration
@EnableCassandraRepositories(basePackages = { "com.fyb.cassandra.repositories" })
public class CassandraConfig {
    ..........
}
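Because the custom DataStax-mapper repositories live outside the scanned package, they still need to be registered as Spring beans in some way. One possible way, sketched here as an assumption rather than part of the original setup, is to declare them explicitly in the same configuration class:
// Hypothetical explicit registration of the custom UDT repository;
// @Autowired and @PostConstruct on the implementation are still processed by Spring.
@Bean
public AccountDeviceRepository accountDeviceRepository() {
    return new AccountDeviceRepositoryImpl();
}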

User-defined types are now supported by Spring Data Cassandra. The latest release, 1.5.0.RELEASE, uses DataStax Cassandra driver 3.1.3, so it works now. Follow the steps below to get it working.
How to use the UserDefinedType (UDT) feature with Spring Data Cassandra:
Use the latest Spring Data Cassandra jar (1.5.0.RELEASE):
group: 'org.springframework.data', name: 'spring-data-cassandra', version: '1.5.0.RELEASE'
Make sure it pulls in the following versions of the related jars:
datastax.cassandra.driver.version=3.1.3
spring.data.cassandra.version=1.5.0.RELEASE
spring.data.commons.version=1.13.0.RELEASE
spring.cql.version=1.5.0.RELEASE
Create the user-defined type in Cassandra. The type name should be the same as the one used in the POJO class.
Address data type:
CREATE TYPE address_type (
id text,
address_type text,
first_name text,
phone text
);
Create the column family in Cassandra with one of the columns as a UDT:
Employee table:
CREATE TABLE employee(
   employee_id uuid,
   employee_name text,
   address frozen<address_type>,
   primary key (employee_id, employee_name)
);
In the domain class, define the field with the @CassandraType annotation and the DataType set to UDT:
#Table("employee") public class Employee {
-- othere fields--
#CassandraType(type = DataType.Name.UDT, userTypeName = "address_type")
private Address address;
}
Create a domain class for the user-defined type. Make sure that each column name in the user-defined type schema is the same as the field name in the domain class.
@UserDefinedType("address_type")
public class Address {

    @CassandraType(type = DataType.Name.TEXT)
    private String id;

    @CassandraType(type = DataType.Name.TEXT)
    private String address_type;
}
In the Cassandra config, change the mapping context bean so that it registers a UserTypeResolver:
@Bean
public CassandraMappingContext mappingContext() throws Exception {
    BasicCassandraMappingContext mappingContext = new BasicCassandraMappingContext();
    mappingContext.setUserTypeResolver(new SimpleUserTypeResolver(cluster().getObject(), cassandraKeyspace));
    return mappingContext;
}
The user-defined type should have the same name everywhere, e.g.:
@UserDefinedType("address_type")
@CassandraType(type = DataType.Name.UDT, userTypeName = "address_type")
CREATE TYPE address_type
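With the type, table, domain classes and mapping context in place, a plain Spring Data repository for Employee can be used like any other. A minimal sketch, where the package, interface name and query are assumptions rather than part of the original answer:
package com.example.repositories; // hypothetical package

import java.util.List;
import java.util.UUID;

import org.springframework.data.cassandra.repository.CassandraRepository;
import org.springframework.data.cassandra.repository.Query;
import org.springframework.stereotype.Repository;

import com.example.domain.Employee; // hypothetical location of the Employee entity

// In Spring Data Cassandra 1.5.x, CassandraRepository takes a single type parameter.
@Repository
public interface EmployeeRepository extends CassandraRepository<Employee> {

    @Query("SELECT * FROM employee WHERE employee_id = ?0")
    List<Employee> findByEmployeeId(UUID employeeId);
}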

Related

JpaPollingChannelAdapter with entity update at end

I am working on integrating a vendor software package whose interface is a DB table mimicking a queue.
Here is the JPA entity representation of such a table:
@Data
@Entity
@Table(name = "TASK")
public static class Task implements Serializable {

    @Id
    @GeneratedValue(generator = "TaskId")
    @SequenceGenerator(name = "TaskId", sequenceName = "TASK_SEQ", allocationSize = 50)
    @Column(name = "ID")
    private BigInteger id;

    private Status status;

    private LocalDate processedDate;

    public enum Status {
        NEW, PROCESSED, ERROR
    }
}
I would like to use Spring Integration to poll NEW records from this table, handle them (the typical use case is to transform them and post them to a JMS queue), and then update the record with either:
status PROCESSED if everything went fine
status ERROR if an exception occurred
I tried to do so with JpaExecutor + JpaPollingChannelAdapter without much success. How would you recommend tackling this case?
Here is how I started:
import java.time.Duration;

import javax.persistence.EntityManager;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Bean;
import org.springframework.integration.config.EnableIntegration;
import org.springframework.integration.dsl.IntegrationFlow;
import org.springframework.integration.dsl.IntegrationFlows;
import org.springframework.integration.dsl.Pollers;
import org.springframework.integration.handler.GenericHandler;
import org.springframework.integration.jpa.core.JpaExecutor;
import org.springframework.integration.jpa.inbound.JpaPollingChannelAdapter;
import org.springframework.stereotype.Component;

@Component
@EnableIntegration
public class ExampleJob {

    @Autowired
    EntityManager entityManager;

    @Bean
    IntegrationFlow taskExecutorFlow() {
        JpaExecutor selectExecutor = new JpaExecutor(entityManager);
        selectExecutor.setJpaQuery("from Task where status = 'NEW'");
        JpaPollingChannelAdapter adapter = new JpaPollingChannelAdapter(selectExecutor);
        return IntegrationFlows
                .from(adapter, c -> c.poller(Pollers.fixedDelay(Duration.ofMinutes(5))).autoStartup(true))
                .handle((task, headers) -> task)
                .get();
    }
}
Use a Jpa.outboundAdapter() as the next .handle() in your flow. It is going to perform a MERGE on the entity in the message payload.
See more in the docs: https://docs.spring.io/spring-integration/docs/current/reference/html/jpa.html#jpa-outbound-channel-adapter
The poller() can be configured with a transaction to make the whole flow against the retrieved entity transactional.
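As an illustration only, here is a minimal sketch of how the flow above could be extended with the outbound adapter. It assumes an EntityManagerFactory bean, that the Task entity from the question is resolvable from this class, and that the JMS posting happens in the middle handler; it is one possible wiring, not the only one:
import java.time.Duration;

import javax.persistence.EntityManagerFactory;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.config.EnableIntegration;
import org.springframework.integration.dsl.IntegrationFlow;
import org.springframework.integration.dsl.IntegrationFlows;
import org.springframework.integration.dsl.Pollers;
import org.springframework.integration.jpa.core.JpaExecutor;
import org.springframework.integration.jpa.dsl.Jpa;
import org.springframework.integration.jpa.inbound.JpaPollingChannelAdapter;
import org.springframework.integration.jpa.support.PersistMode;

@Configuration
@EnableIntegration
public class TaskFlowConfig {

    @Bean
    IntegrationFlow taskFlow(EntityManagerFactory entityManagerFactory) {
        JpaExecutor selectExecutor = new JpaExecutor(entityManagerFactory);
        selectExecutor.setJpaQuery("from Task where status = 'NEW'");
        JpaPollingChannelAdapter adapter = new JpaPollingChannelAdapter(selectExecutor);

        return IntegrationFlows
                .from(adapter, c -> c.poller(Pollers.fixedDelay(Duration.ofMinutes(5))))
                .split() // the inbound adapter emits a List<Task>; process one entity at a time
                .handle((payload, headers) -> {
                    Task task = (Task) payload;
                    // transform / post to JMS here; error handling (status ERROR) would be
                    // wired via an error channel and is omitted in this sketch
                    task.setStatus(Task.Status.PROCESSED);
                    return task;
                })
                // MERGE the updated entity back into the TASK table
                .handle(Jpa.outboundAdapter(entityManagerFactory)
                                .entityClass(Task.class)
                                .persistMode(PersistMode.MERGE),
                        e -> e.transactional())
                .get();
    }
}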

Is there a way to pass list of enums to step in cucumber 4.x and java

Let's say I have an example enum class:
public enum Name { FIRST_NAME, LAST_NAME;}
and I have a step such as:
Then followed name types are listed:
| FIRST_NAME |
| LAST_NAME |
in which I want to pass a List, like:
@Then("^followed name types are listed:$")
public void followedNameTypesAreListed(List<Name> nameTypes) {...}
I'm currently migrating to Cucumber 4.x, and what I figured out is that I can register a custom DataTableType like:
typeRegistry.defineDataTableType(new DataTableType(Name.class,
    (TableCellTransformer<Name>) Name::valueOf));
but doing it for every single enum class doesn't sound very efficient. Isn't there another way to handle lists of any enum class?
One quick way to do this would be to use an object mapper as the default cell transformer. The object mapper will then be used in all situations where a cell is mapped to a single object and no existing data table type has been defined.
You could use jackson-databind for this.
In Cucumber v4:
package com.example.app;

import com.fasterxml.jackson.databind.ObjectMapper;
import io.cucumber.core.api.TypeRegistry;
import io.cucumber.core.api.TypeRegistryConfigurer;
import java.util.Locale;

public class ParameterTypes implements TypeRegistryConfigurer {

    private final ObjectMapper objectMapper = new ObjectMapper();

    @Override
    public Locale locale() {
        return Locale.ENGLISH;
    }

    @Override
    public void configureTypeRegistry(TypeRegistry typeRegistry) {
        typeRegistry.setDefaultDataTableCellTransformer(objectMapper::convertValue);
    }
}
And in v5:
package com.example.app;

import com.fasterxml.jackson.databind.ObjectMapper;
import io.cucumber.java.DefaultDataTableCellTransformer;
import java.lang.reflect.Type;

public class DataTableSteps {

    private final ObjectMapper objectMapper = new ObjectMapper();

    @DefaultDataTableCellTransformer
    public Object defaultTransformer(Object fromValue, Type toValueType) {
        return objectMapper.convertValue(fromValue, objectMapper.constructType(toValueType));
    }
}
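With the default cell transformer registered, the original step definition should bind the data table directly to a list of enum constants. A minimal sketch using the v5 annotation package; the package and class names are assumptions, and Name is the enum from the question (assumed to be in the same package):
package com.example.app;

import io.cucumber.java.en.Then;
import java.util.List;

public class NameSteps {

    // Each table cell ("FIRST_NAME", "LAST_NAME") is converted to a Name constant
    // by the default data table cell transformer registered above.
    @Then("followed name types are listed:")
    public void followedNameTypesAreListed(List<Name> nameTypes) {
        // assertions on nameTypes go here
    }
}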

TableView with different objects (javafx)

I'm currently developing an application for keeping track of who is responsible for different patients; however, I haven't been able to work out how to fill a table with different object types.
Below is the code for my TableView controller. The TableView will end up holding four different object types, all retrieved from a database.
I want my table to hold Patient objects, User objects (responsible) and a RelationManager object.
Below is my code; if you need more of it, please let me know :-).
package fird.presentation;

import fird.Patient;
import fird.RelationManager;
import fird.User;
import fird.data.DAOFactory;
import fird.data.DataDAO;
import java.net.URL;
import java.util.Arrays;
import java.util.List;
import java.util.ResourceBundle;
import javafx.collections.FXCollections;
import javafx.collections.ObservableList;
import javafx.fxml.FXML;
import javafx.fxml.Initializable;
import javafx.scene.control.Button;
import javafx.scene.control.TableColumn;
import javafx.scene.control.TableView;
import javafx.scene.control.TextField;
import javafx.scene.control.cell.PropertyValueFactory;

/**
 * FXML Controller class
 *
 * @author SimonKragh
 */
public class KMAMainFrameOverviewController implements Initializable {

    @FXML
    private TextField txtCPRKMAMainFrame;
    @FXML
    private TableColumn<Patient, String> TableColumnCPR;
    @FXML
    private TableColumn<Patient, String> TableColumnFirstname;
    @FXML
    private TableColumn<Patient, String> TableColumnSurname;
    @FXML
    private TableColumn<User, String> TableColumnResponsible;
    @FXML
    private TableColumn<RelationManager, String> TableColumnLastEdited;
    @FXML
    private TableView<RelationManager> tblPatients;
    @FXML
    private Button btnShowHistory;
    @FXML
    private TableColumn<?, ?> TableColumnDepartment;

    /**
     * Initializes the controller class.
     */
    @Override
    public void initialize(URL url, ResourceBundle rb) {
        // Start of logic for the KMAMainFrameOverviewController
        DataDAO dao = DAOFactory.getDataDao();
        TableColumnCPR.setCellValueFactory(new PropertyValueFactory<Patient, String>("CPR"));
        TableColumnFirstname.setCellValueFactory(new PropertyValueFactory<Patient, String>("Firstname"));
        TableColumnSurname.setCellValueFactory(new PropertyValueFactory<Patient, String>("Surname"));
        TableColumnResponsible.setCellValueFactory(new PropertyValueFactory<User, String>("Responsible"));
        TableColumnLastEdited.setCellValueFactory(new PropertyValueFactory<RelationManager, String>("Last Edited"));
        ObservableList<RelationManager> relationData = FXCollections.observableArrayList(dao.getAllActiveRelations());
        tblPatients.setItems(relationData);
        tblPatients.getColumns().addAll(TableColumnCPR, TableColumnFirstname, TableColumnSurname, TableColumnResponsible, TableColumnLastEdited);
        System.out.println(tblPatients.getItems().toString());
    }
}
relationData is an ObservableList of the returned RelationManager objects. Each of these contains a User object, a Patient object and a Responsible object.
Best,
Simon.
The exact details of how you do this depend on your requirements: for example, for a given RelationManager object, do the User, Patient, or Responsible objects associated with it ever change? Do you need the table to be editable?
But the basic idea is that each row in the table represents some RelationManager, so the table type is TableView<RelationManager>. Each column displays a value of some type (call it S), so each column is of type TableColumn<RelationManager, S>, where S might vary from one column to the next.
The cell value factory is an object that specifies how to get from the RelationManager object to an observable value of type S. The exact way you do this depends on how your model classes are set up.
If the individual objects associated with a given RelationManager never change (e.g. the Patient for a given RelationManager is always the same), then it's pretty straightforward. Assuming you have the usual setup for Patient:
public class Patient {

    private StringProperty firstName = new SimpleStringProperty(...);

    public StringProperty firstNameProperty() {
        return firstName;
    }

    public String getFirstName() {
        return firstName.get();
    }

    public void setFirstName(String firstName) {
        this.firstName.set(firstName);
    }

    // etc etc
}
then you can just do
TableColumn<RelationManager, String> firstNameColumn = new TableColumn<>("First Name");
firstNameColumn.setCellValueFactory(new Callback<CellDataFeatures<RelationManager, String>, ObservableValue<String>>() {
    @Override
    public ObservableValue<String> call(CellDataFeatures<RelationManager, String> data) {
        return data.getValue()               // the RelationManager
                   .getPatient().firstNameProperty();
    }
});
If you are not using JavaFX properties, you can use the same fallback that the PropertyValueFactory uses, i.e.:
TableColumn<RelationManager, String> firstNameColumn = new TableColumn<>("First Name");
firstNameColumn.setCellValueFactory(new Callback<CellDataFeatures<RelationManager, String>, ObservableValue<String>>() {
    @Override
    public ObservableValue<String> call(CellDataFeatures<RelationManager, String> data) {
        return new ReadOnlyStringWrapper(data.getValue().getPatient().getFirstName());
    }
});
but note that this won't update if you change the name of the patient externally to the table.
However, none of this will work if the patient object associated with the relation manager is changed (the cell will still be observing the wrong firstNameProperty()). In that case you need an observable value that changes when either the "intermediate" patient property or the firstNameProperty change. JavaFX has a Bindings API with some select(...) methods that can do this: unfortunately in JavaFX 8 they spew out enormous amounts of warnings to the console if any of the objects along the way are null, which they will be in a TableView context. In this case I would recommend looking at the EasyBind framework, which will allow you to do something like
firstNameColumn.setCellValueFactory(data ->
        EasyBind.select(data.getValue().patientProperty())
                .selectObject(Patient::firstNameProperty));
(EasyBind requires JavaFX 8, so if you get to use it, you also get to use lambda expressions and method references :).)
In either case, if you want the table to be editable, there's a little extra work to do for the editable cells in terms of wiring editing commits back to the appropriate call to set a property.
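As an illustration (not part of the original answer), a minimal sketch of an editable first-name column, assuming the JavaFX-property version of Patient above, a TableView<RelationManager> named table, and an import of javafx.scene.control.cell.TextFieldTableCell:
table.setEditable(true);
firstNameColumn.setCellFactory(TextFieldTableCell.forTableColumn());
// write the committed edit back through the Patient's property
firstNameColumn.setOnEditCommit(event ->
        event.getRowValue().getPatient().setFirstName(event.getNewValue()));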

Unable to connect to cassandra using Hector

I am unable to access Cassandra using Hector. Following is the code:
import java.util.Arrays;
import java.util.List;

import me.prettyprint.cassandra.service.CassandraHostConfigurator;
import me.prettyprint.cassandra.service.ThriftCluster;
import me.prettyprint.cassandra.service.ThriftKsDef;
import me.prettyprint.hector.api.Cluster;
import me.prettyprint.hector.api.Keyspace;
import me.prettyprint.hector.api.ddl.ColumnFamilyDefinition;
import me.prettyprint.hector.api.ddl.KeyspaceDefinition;
import me.prettyprint.hector.api.factory.HFactory;
import me.prettyprint.hector.api.mutation.Mutator;

public class Hector {

    public static void main(String[] args) {
        boolean cfExists = false;
        Cluster cluster = HFactory.getOrCreateCluster("mycluster", new CassandraHostConfigurator("host:9160"));
        Keyspace keyspace = HFactory.createKeyspace("Keyspace1", cluster);
        // first check if the key space exists
        KeyspaceDefinition keyspaceDetail = cluster.describeKeyspace("Keyspace1");
        // if not, create one
        if (keyspaceDetail == null) {
            CassandraHostConfigurator cassandraHostConfigurator = new CassandraHostConfigurator("host:9160");
            ThriftCluster cassandraCluster = new ThriftCluster("mycluster", cassandraHostConfigurator);
            ColumnFamilyDefinition cfDef = HFactory.createColumnFamilyDefinition("Keyspace1", "base");
            cassandraCluster.addKeyspace(new ThriftKsDef("Keyspace1", "org.apache.cassandra.locator.SimpleStrategy", 1,
                    Arrays.asList(cfDef)));
        } else {
            // even if the key space exists, we need to check if the column family exists
            List<ColumnFamilyDefinition> columnFamilyDefinitions = keyspaceDetail.getCfDefs();
            for (ColumnFamilyDefinition def : columnFamilyDefinitions) {
                String columnFamilyName = def.getName();
                if (columnFamilyName.equals("tcs_im"))
                    cfExists = true;
            }
        }
    }
}
I'm encountering the following error:
log4j:WARN No appenders could be found for logger (me.prettyprint.cassandra.connection.CassandraHostRetryService).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Exception in thread "main" java.lang.IllegalAccessError: tried to access class me.prettyprint.cassandra.service.JmxMonitor from class me.prettyprint.cassandra.connection.HConnectionManager
at me.prettyprint.cassandra.connection.HConnectionManager.<init>(HConnectionManager.java:78)
at me.prettyprint.cassandra.service.AbstractCluster.<init>(AbstractCluster.java:69)
at me.prettyprint.cassandra.service.AbstractCluster.<init>(AbstractCluster.java:65)
at me.prettyprint.cassandra.service.ThriftCluster.<init>(ThriftCluster.java:17)
at me.prettyprint.hector.api.factory.HFactory.createCluster(HFactory.java:176)
at me.prettyprint.hector.api.factory.HFactory.getOrCreateCluster(HFactory.java:155)
at com.im.tcs.Hector.main(Hector.java:20)
Please help me understand why this is happening.
We use a CassandraConnection class as a convenience-class:
import me.prettyprint.cassandra.connection.DynamicLoadBalancingPolicy;
import me.prettyprint.cassandra.service.CassandraHostConfigurator;
import me.prettyprint.cassandra.service.ExhaustedPolicy;
import me.prettyprint.cassandra.service.OperationType;
import me.prettyprint.hector.api.Cluster;
import me.prettyprint.hector.api.HConsistencyLevel;
import me.prettyprint.hector.api.Keyspace;
import me.prettyprint.hector.api.factory.HFactory;

/**
 * lazy connect
 */
final class CassandraConnection {

    // Constants -----------------------------------------------------
    private static final String HOSTS = "localhost";
    private static final int PORT = 9160;
    private static final String CLUSTER_NAME = "myCluster";
    private static final int TIMEOUT = 500;
    private static final String KEYSPACE = "Keyspace1";
    private static final ConsistencyLevelPolicy CL_POLICY = new ConsistencyLevelPolicy();

    // Attributes ----------------------------------------------------
    private Cluster cluster;
    private volatile Keyspace keyspace;

    // Constructors --------------------------------------------------
    CassandraConnection() {}

    // Methods --------------------------------------------------------
    Cluster getCluster() {
        if (null == cluster) {
            CassandraHostConfigurator config = new CassandraHostConfigurator();
            config.setHosts(HOSTS);
            config.setPort(PORT);
            config.setUseThriftFramedTransport(true);
            config.setUseSocketKeepalive(true);
            config.setAutoDiscoverHosts(false);
            // maxWorkerThreads provides the throttling for us, so Hector can be let to grow freely...
            config.setExhaustedPolicy(ExhaustedPolicy.WHEN_EXHAUSTED_GROW);
            config.setMaxActive(1000); // hack since ExhaustedPolicy doesn't work
            // suspend hosts if response is unacceptable for web response
            config.setCassandraThriftSocketTimeout(TIMEOUT);
            config.setUseHostTimeoutTracker(true);
            config.setHostTimeoutCounter(3);
            config.setLoadBalancingPolicy(new DynamicLoadBalancingPolicy());
            cluster = HFactory.createCluster(CLUSTER_NAME, config);
        }
        return cluster;
    }

    Keyspace getKeyspace() {
        if (null == keyspace) {
            keyspace = HFactory.createKeyspace(KEYSPACE, getCluster(), CL_POLICY);
        }
        return keyspace;
    }

    private static class ConsistencyLevelPolicy implements me.prettyprint.hector.api.ConsistencyLevelPolicy {
        @Override
        public HConsistencyLevel get(final OperationType op) {
            return HConsistencyLevel.ONE;
        }

        @Override
        public HConsistencyLevel get(final OperationType op, final String cfName) {
            return get(op);
        }
    }
}
Example of use:
private final CassandraConnection conn = new CassandraConnection();
SliceQuery<String, String, String> sliceQuery = HFactory.createSliceQuery(
conn.getKeyspace(), StringSerializer.get(), StringSerializer.get(), StringSerializer.get());
sliceQuery.setColumnFamily("myColumnFamily");
sliceQuery.setRange("", "", false, Integer.MAX_VALUE);
sliceQuery.setKey("myRowKey");
ColumnSlice<String, String> columnSlice = sliceQuery.execute().get();
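To consume the result, you can iterate over the slice. A short sketch, assuming an import of me.prettyprint.hector.api.beans.HColumn:
// Print each column name/value pair returned for "myRowKey".
for (HColumn<String, String> column : columnSlice.getColumns()) {
    System.out.println(column.getName() + " = " + column.getValue());
}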

How to configure json format when using jaxb annotations with jersey

I am using Jersey to expose a service which uses JAXB-annotated classes to configure the shape of the JSON.
I am trying to include the type directive in each JSON element. I do this by providing a Provider, as such:
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.ext.ContextResolver;
import javax.ws.rs.ext.Provider;

import org.codehaus.jackson.JsonParser.Feature;
import org.codehaus.jackson.map.ObjectMapper;
import org.codehaus.jackson.map.ObjectMapper.DefaultTyping;

@Provider
@Produces(MediaType.APPLICATION_JSON)
public class CmsContextResolver implements ContextResolver<ObjectMapper> {

    ObjectMapper mapper;

    public CmsContextResolver() {
        mapper = new ObjectMapper();
        // @JsonTypeInfo(use = JsonTypeInfo.Id.NAME, include =
        // JsonTypeInfo.As.WRAPPER_OBJECT, property = "#type")
        mapper.configure(Feature.INTERN_FIELD_NAMES, true);
        mapper.enableDefaultTypingAsProperty(DefaultTyping.NON_FINAL, "#type");
    }

    @Override
    public ObjectMapper getContext(Class<?> arg0) {
        return mapper;
    }
}
And this provider is definitely being picked up.
10 May 2011 3:53:18 PM com.sun.jersey.api.core.ScanningResourceConfig logClasses
INFO: Provider classes found:
class com.afrozaar.cms.service.CmsContextResolver
But it is making no difference; the format of the JSON is unaffected.
As far as I can tell, the problem stems from the fact that Jersey is not using Jackson to serialize, or that Jersey is ignoring my Jackson configuration overrides...
I don't know why your code isn't working, but this is what I use:
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.ext.Provider;

import org.codehaus.jackson.jaxrs.JacksonJaxbJsonProvider;

@Provider
@Produces(MediaType.APPLICATION_JSON)
public class JsonProvider extends JacksonJaxbJsonProvider {

    public JsonProvider() {
        super();
        setMapper(myConfiguredObjectMapper);
    }
}
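The answer leaves myConfiguredObjectMapper undefined. A minimal sketch of how it might be built to match the configuration from the question (a guess using the Jackson 1.x API, assuming an import of org.codehaus.jackson.map.ObjectMapper; not part of the original answer):
// Hypothetical mapper mirroring the resolver from the question; these members
// would live inside the JsonProvider class.
private static final ObjectMapper myConfiguredObjectMapper = new ObjectMapper();

static {
    myConfiguredObjectMapper.enableDefaultTypingAsProperty(
            ObjectMapper.DefaultTyping.NON_FINAL, "#type");
}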
