Is it possible to use a custom datastax session for Spring-Data?
Hi, I know Spring Data for Cassandra uses a DataStax Session internally. However, I have a custom DataStax Session object (provided by another service) that I would like Spring Data to use instead of the pre-wired one. Assuming both DataStax sessions are on the same driver version, is this possible?
Yes, it's possible.
Depending on your setup, there are a couple of approaches. Let me explain the two most common scenarios:
Direct usage of the Template API
Session yourSession = …; // the externally provided DataStax session
CqlTemplate cqlTemplate = new CqlTemplate(yourSession);
CassandraTemplate cassandraTemplate = new CassandraTemplate(yourSession);
Exposing the Session as @Bean
This one might require a bit more setup, as the configuration support expects usage of CassandraSessionFactoryBean and CassandraClusterFactoryBean.
Take a look at AbstractCassandraConfiguration to see which supporting beans (CassandraConverter, CassandraMappingContext) need to be configured for Spring Data's Cassandra support.
@Configuration
class MyCassandraConfig {

    private final Session mySession;

    public MyCassandraConfig(Session mySession) {
        this.mySession = mySession;
    }

    @Bean
    public CassandraConverter cassandraConverter() {
        MappingCassandraConverter mappingCassandraConverter = new MappingCassandraConverter(cassandraMapping());
        mappingCassandraConverter.setCustomConversions(customConversions());
        return mappingCassandraConverter;
    }

    @Bean
    public CassandraMappingContext cassandraMapping() {
        Cluster cluster = mySession.getCluster();
        String keyspace = mySession.getLoggedKeyspace();

        CassandraMappingContext mappingContext = new CassandraMappingContext(
                new SimpleUserTypeResolver(cluster, keyspace), new SimpleTupleTypeFactory(cluster));

        CustomConversions customConversions = customConversions();
        mappingContext.setCustomConversions(customConversions);
        mappingContext.setSimpleTypeHolder(customConversions.getSimpleTypeHolder());

        return mappingContext;
    }

    @Bean
    public CustomConversions customConversions() {
        return new CassandraCustomConversions(Collections.emptyList());
    }

    @Bean
    public CassandraTemplate cassandraTemplate() {
        return new CassandraTemplate(mySession, cassandraConverter());
    }
}
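With this configuration in place, the template backed by your external Session can be injected like any other bean. A minimal usage sketch (PersonService and Person are hypothetical names, not part of the question):

@Service
class PersonService {

    private final CassandraTemplate template;

    PersonService(CassandraTemplate template) {
        this.template = template;
    }

    void save(Person person) {
        // insert(...) executes against the externally provided Session
        template.insert(person);
    }
}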
How do we customize the Jackson ObjectMapper used by the WebFlux OutboundGateway? The usual customization done via Jackson2ObjectMapperBuilder or Jackson2ObjectMapperBuilderCustomizer is NOT respected.
Without this customization, LocalDate is serialized as if SerializationFeature.WRITE_DATES_AS_TIMESTAMPS were enabled. Sample output: [2022-10-20], and there is no way to customize the format.
I assume you are really talking about the Spring Boot auto-configuration that is applied to the WebFlux instance. Consider using the overloaded WebFlux.outboundGateway(String uri, WebClient webClient) so that you can auto-wire a WebClient.Builder which might already be configured with the mentioned customized ObjectMapper.
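A minimal sketch of that approach, assuming Spring Integration's WebFlux DSL and a Boot-managed WebClient.Builder (the channel name and URI are placeholders):

@Bean
@ServiceActivator(inputChannel = "httpRequestChannel") // placeholder channel name
public MessageHandler webFluxGateway(WebClient.Builder builder) {
    // The Boot-managed builder carries the customized ObjectMapper codecs
    WebClient webClient = builder.build();
    return WebFlux.outboundGateway("http://localhost:8080/api", webClient) // placeholder URI
            .httpMethod(HttpMethod.GET)
            .expectedResponseType(String.class)
            .get();
}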
Registering a bean of type com.fasterxml.jackson.databind.module.SimpleModule will automatically be used by the pre-configured ObjectMapper bean. In SimpleModule, it is possible to register custom serialization and deserialization specifications.
To put that into code, a very simple solution would be the following:
@Bean
public SimpleModule odtModule() {
    SimpleModule module = new SimpleModule();

    JsonSerializer<LocalDate> serializer = new JsonSerializer<>() {
        @Override
        public void serialize(LocalDate odt, JsonGenerator jgen, SerializerProvider provider) throws IOException {
            String formatted = odt.format(DateTimeFormatter.ISO_LOCAL_DATE);
            jgen.writeString(formatted);
        }
    };

    JsonDeserializer<LocalDate> deserializer = new JsonDeserializer<>() {
        @Override
        public LocalDate deserialize(JsonParser jsonParser, DeserializationContext deserializationContext) throws IOException {
            return LocalDate.parse(jsonParser.getValueAsString());
        }
    };

    module.addSerializer(LocalDate.class, serializer);
    module.addDeserializer(LocalDate.class, deserializer);
    return module;
}
Note that using lambdas for the implementations has sometimes resulted in weird behaviors for me, so I tend not to do that.
I just configured authentication in Apache Ignite (on a specific server, not localhost):
https://apacheignite.readme.io/docs/advanced-security
However, I encountered some issues while trying to connect. Where should I provide the credentials?
TcpDiscoverySpi spi = new TcpDiscoverySpi();
TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryMulticastIpFinder();

String ipList = appConfig.getIgniteIPAddressList();
List<String> addressList = Arrays.asList(ipList.split(";"));
ipFinder.setAddresses(addressList);
spi.setIpFinder(ipFinder);

IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setIgniteInstanceName("IgnitePod");
cfg.setClientMode(true);
cfg.setDiscoverySpi(spi);

Ignite ignite = Ignition.start(cfg);
Does anybody have an idea how to implement this?
https://apacheignite.readme.io/docs/advanced-security
This page describes how to configure authentication via username and password for THIN connections only (JDBC, ODBC).
You can create users using SQL commands like this:
https://apacheignite-sql.readme.io/docs/create-user
You can provide credentials to the thin client connection string using its properties:
https://apacheignite-sql.readme.io/docs/connection-string-and-dsn#section-supported-arguments
https://apacheignite-sql.readme.io/docs/jdbc-driver#section-additional-connection-string-examples
Please also check that you have Ignite persistence configured.
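To put that into code, a hedged sketch using the thin JDBC driver (host, user name, and password are placeholders; with authentication enabled, Ignite's documented default superuser credentials are ignite/ignite):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class ThinClientAuthExample {
    public static void main(String[] args) throws Exception {
        // Credentials are passed as properties of the thin-driver connection string
        try (Connection conn = DriverManager.getConnection(
                "jdbc:ignite:thin://127.0.0.1/?user=ignite&password=ignite");
             Statement stmt = conn.createStatement()) {
            // Create an additional user via SQL, per the CREATE USER docs linked above
            stmt.executeUpdate("CREATE USER app WITH PASSWORD 'secret'");
        }
    }
}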
As Andrei notes, Ignite only authenticates thin clients by default, and even then only when persistence is enabled. If you need thick clients to authenticate as well, you can do this using a plugin. Third-party commercial solutions also exist.
Apache Ignite does not provide these kinds of security capabilities in its open-source version. You can either implement it on your own or use the commercial GridGain distribution.
Here are the steps to implement a custom security plugin.
You would need to implement GridSecurityProcessor, which is used to authenticate the joining node.
In GridSecurityProcessor, you would have to implement the authenticateNode() API as follows:
public SecurityContext authenticateNode(ClusterNode node, SecurityCredentials cred) throws IgniteCheckedException {
    SecurityCredentials userSecurityCredentials;

    if (securityPluginConfiguration != null) {
        if ((userSecurityCredentials = securityPluginConfiguration.getSecurityCredentials()) != null) {
            return userSecurityCredentials.equals(cred) ? new SecurityContextImpl() : null;
        }

        if (cred == null && userSecurityCredentials == null) {
            return new SecurityContextImpl();
        }
    }

    if (cred == null) {
        return new SecurityContextImpl();
    }

    return null;
}
Also, you would need to extend TcpDiscoverySpi to pass the user credentials during initLocalNode(), as follows:
@Override
protected void initLocalNode(int srvPort, boolean addExtAddrAttr) {
    try {
        super.initLocalNode(srvPort, addExtAddrAttr);
        this.setSecurityCredentials();
    } catch (Exception e) {
        e.printStackTrace();
    }
}

private void setSecurityCredentials() {
    if (securityCredentials != null) {
        Map<String, Object> attributes = new HashMap<>(locNode.getAttributes());
        attributes.put(IgniteNodeAttributes.ATTR_SECURITY_CREDENTIALS, securityCredentials);
        this.locNode.setAttributes(attributes);
    }
}
You can follow the link below for detailed steps on writing a custom security plugin and how to use it.
https://www.bugdbug.com/post/how-to-secure-apache-ignite-cluster
I was able to solve my own problem by creating my own CustomTcpDiscoverySpi.
First, create this class:
import org.apache.ignite.IgniteException;
import org.apache.ignite.cluster.ClusterNode;
import org.apache.ignite.internal.IgniteNodeAttributes;
import org.apache.ignite.internal.processors.security.SecurityContext;
import org.apache.ignite.lang.IgniteProductVersion;
import org.apache.ignite.plugin.security.SecurityCredentials;
import org.apache.ignite.spi.discovery.DiscoverySpiNodeAuthenticator;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;

import java.util.Map;

public class CustomTcpDiscoverySpi extends TcpDiscoverySpi implements DiscoverySpiNodeAuthenticator {

    SecurityCredentials securityCredentials;

    public CustomTcpDiscoverySpi(final SecurityCredentials securityCredentials) {
        this.securityCredentials = securityCredentials;
        this.setAuthenticator(this);
    }

    @Override
    public SecurityContext authenticateNode(ClusterNode clusterNode, SecurityCredentials securityCredentials) throws IgniteException {
        return null;
    }

    @Override
    public boolean isGlobalNodeAuthentication() {
        return true;
    }

    @Override
    public void setNodeAttributes(final Map<String, Object> attrs, final IgniteProductVersion ver) {
        // Attach the credentials to the local node's attributes so the server can authenticate it
        attrs.put(IgniteNodeAttributes.ATTR_SECURITY_CREDENTIALS, this.securityCredentials);
        super.setNodeAttributes(attrs, ver);
    }
}
And then use it like below:
SecurityCredentials cred = new SecurityCredentials();
cred.setLogin(appConfig.getIgniteUser());
cred.setPassword(appConfig.getIgnitePassword());

CustomTcpDiscoverySpi spi = new CustomTcpDiscoverySpi(cred);
// TcpDiscoverySpi spi = new TcpDiscoverySpi(); -> removed to use the CustomTcpDiscoverySpi

TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryMulticastIpFinder();
String ipList = appConfig.getIgniteIPAddressList();
List<String> addressList = Arrays.asList(ipList.split(";"));
ipFinder.setAddresses(addressList);
spi.setIpFinder(ipFinder);

IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setIgniteInstanceName("IgnitePod");
cfg.setClientMode(true);
cfg.setAuthenticationEnabled(true);

// Ignite persistence configuration (authentication requires persistence to be enabled).
DataStorageConfiguration storageCfg = new DataStorageConfiguration();
storageCfg.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);

// Applying settings.
cfg.setDataStorageConfiguration(storageCfg);
cfg.setDiscoverySpi(spi);

Ignite ignite = Ignition.start(cfg);
Hope this helps other people who are stuck with the same problem.
The only option for peer-authenticating server nodes that is available in vanilla Apache Ignite is SSL with certificates.
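For completeness, a hedged sketch of that SSL option (keystore paths and passwords are placeholders):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.ssl.SslContextFactory;

public class SslNodeExample {
    public static void main(String[] args) {
        SslContextFactory sslFactory = new SslContextFactory();
        // Placeholder paths/passwords; every node needs a key pair trusted by its peers
        sslFactory.setKeyStoreFilePath("/path/to/node.jks");
        sslFactory.setKeyStorePassword("changeit".toCharArray());
        sslFactory.setTrustStoreFilePath("/path/to/trust.jks");
        sslFactory.setTrustStorePassword("changeit".toCharArray());

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setSslContextFactory(sslFactory);

        Ignite ignite = Ignition.start(cfg);
    }
}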
Storm topology reads data from Kafka and writes into Cassandra tables
In Storm, I am creating the Cassandra cluster connection and session in the prepare method:
cassandraCluster = Cluster.builder().withoutJMXReporting().withoutMetrics()
        .addContactPoints(nodes)
        .withRetryPolicy(DowngradingConsistencyRetryPolicy.INSTANCE)
        .withReconnectionPolicy(new ExponentialReconnectionPolicy(100L,
                TimeUnit.MINUTES.toMillis(5)))
        .withLoadBalancingPolicy(new TokenAwarePolicy(new RoundRobinPolicy()))
        .build();
session = cassandraCluster.connect(keyspace);
In the execute method I process the tuple and save it in a Cassandra table.
Suppose I want to write data from a single tuple into multiple tables. Writing a separate bolt for each table would be a good choice, but then I would have to create a cluster connection and session per table in each bolt.
However, according to this link, a single session per cluster is a good idea for performance:
http://www.datastax.com/dev/blog/4-simple-rules-when-using-the-datastax-drivers-for-cassandra
Does anyone have an idea how to create the cluster connection in one bolt and use that connection in other bolts?
It depends on how Storm allocates the bolts and spouts to the workers. You can't assume that you can share connections between bolts, because they might be running in different workers (read: JVMs) or on different nodes entirely.
See my answer here: Mongo connection pooling for Storm topology
Might look something like this pseudocode:
public class CassandraBolt extends BaseRichBolt {

    private static final long serialVersionUID = 1L;
    private static Logger LOG = LoggerFactory.getLogger(CassandraBolt.class);

    OutputCollector _collector;

    // whatever your Cassandra session is;
    // has to be transient because Cluster/Session are not serializable
    protected transient Cluster _cluster;
    protected transient Session _session;

    @SuppressWarnings("rawtypes")
    @Override
    public void prepare(Map stormConf, TopologyContext context, OutputCollector collector) {
        _collector = collector;

        // 'nodes' and 'keyspace' are placeholders; maybe get them from stormConf
        // instead of hard-coding them
        _cluster = Cluster.builder().withoutJMXReporting().withoutMetrics()
                .addContactPoints(nodes)
                .withRetryPolicy(DowngradingConsistencyRetryPolicy.INSTANCE)
                .withReconnectionPolicy(new ExponentialReconnectionPolicy(100L,
                        TimeUnit.MINUTES.toMillis(5)))
                .withLoadBalancingPolicy(new TokenAwarePolicy(new RoundRobinPolicy()))
                .build();
        _session = _cluster.connect(keyspace);
    }

    @Override
    public void execute(Tuple input) {
        try {
            // use _session to talk to Cassandra
        } catch (Exception e) {
            LOG.error("CassandraBolt error", e);
            _collector.reportError(e);
        }
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        // no output fields for a terminal bolt
    }
}
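If you additionally want all bolts inside one worker to share a single session (per the DataStax guidance linked above), a minimal holder sketch; the class and its parameters are hypothetical, not Storm or driver API:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

// One Cluster/Session pair shared by every bolt instance in the same worker JVM.
public final class CassandraSessionHolder {

    private static Cluster cluster;
    private static Session session;

    private CassandraSessionHolder() {
    }

    public static synchronized Session getSession(String[] nodes, String keyspace) {
        if (session == null) {
            cluster = Cluster.builder().addContactPoints(nodes).build();
            session = cluster.connect(keyspace);
        }
        return session;
    }
}

Each bolt would then call CassandraSessionHolder.getSession(nodes, keyspace) in prepare(), so a worker opens one connection pool no matter how many bolts it hosts.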
I'm trying to realize the following workflow with Spring Integration:
1) Poll a REST API
2) Store the resulting POJO in a Cassandra cluster
It's my first try with Spring Integration, so I'm still a bit overwhelmed by the mass of information in the reference documentation. After some research, I could make the following work:
1) Poll the REST API
2) Transform the mapped POJO JSON result into a string
3) Save the string into a file
Here's the code:
@Configuration
public class ConsulIntegrationConfig {

    @InboundChannelAdapter(value = "consulHttp", poller = @Poller(maxMessagesPerPoll = "1", fixedDelay = "1000"))
    public String consulAgentPoller() {
        return "";
    }

    @Bean
    public MessageChannel consulHttp() {
        return MessageChannels.direct("consulHttp").get();
    }

    @Bean
    @ServiceActivator(inputChannel = "consulHttp")
    MessageHandler consulAgentHandler() {
        final HttpRequestExecutingMessageHandler handler =
                new HttpRequestExecutingMessageHandler("http://localhost:8500/v1/agent/self");
        handler.setExpectedResponseType(AgentSelfResult.class);
        handler.setOutputChannelName("consulAgentSelfChannel");
        LOG.info("Created bean 'consulAgentHandler'");
        return handler;
    }

    @Bean
    public MessageChannel consulAgentSelfChannel() {
        return MessageChannels.direct("consulAgentSelfChannel").get();
    }

    @Bean
    public MessageChannel consulAgentSelfFileChannel() {
        return MessageChannels.direct("consulAgentSelfFileChannel").get();
    }

    @Bean
    @ServiceActivator(inputChannel = "consulAgentSelfFileChannel")
    MessageHandler consulAgentFileHandler() {
        final Expression directoryExpression = new SpelExpressionParser().parseExpression("'./'");
        final FileWritingMessageHandler handler = new FileWritingMessageHandler(directoryExpression);
        handler.setFileNameGenerator(message -> "../../agent_self.txt");
        handler.setFileExistsMode(FileExistsMode.APPEND);
        handler.setCharset("UTF-8");
        handler.setExpectReply(false);
        return handler;
    }
}
@Component
public final class ConsulAgentTransformer {

    @Transformer(inputChannel = "consulAgentSelfChannel", outputChannel = "consulAgentSelfFileChannel")
    public String transform(final AgentSelfResult json) throws IOException {
        final String result = new StringBuilder(json.toString()).append("\n").toString();
        return result;
    }
}
This works fine!
But now, instead of writing the object to a file, I want to store it in a Cassandra cluster with spring-data-cassandra. For that, I commented out the file handler in the config, returned the POJO from the transformer, and created the following:
@MessagingGateway(name = "consulCassandraGateway", defaultRequestChannel = "consulAgentSelfFileChannel")
public interface CassandraStorageService {

    @Gateway(requestChannel = "consulAgentSelfFileChannel")
    void store(AgentSelfResult agentSelfResult);
}
@Component
public final class CassandraStorageServiceImpl implements CassandraStorageService {

    @Override
    public void store(AgentSelfResult agentSelfResult) {
        // use a spring-data-cassandra repository to store
        LOG.info("Received 'AgentSelfResult': {}", agentSelfResult);
        LOG.info("Trying to store 'AgentSelfResult' in Cassandra cluster...");
    }
}
But this seems to be the wrong approach; the service method is never triggered.
So my question is: what would be a correct approach for my use case? Do I have to implement the MessageHandler interface in my service component and use a @ServiceActivator in my config? Or is there something missing in my current gateway approach? Or maybe there is another solution that I'm not able to see.
As mentioned before, I'm new to SI, so this may be a stupid question...
Nevertheless, thanks a lot in advance!
It's not clear how you are wiring in your CassandraStorageService bean.
The Spring Integration Cassandra Extension Project has a message-handler implementation.
The Cassandra sink in spring-cloud-stream-modules uses it with Java configuration, so you can use that as an example.
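A hedged sketch of what that wiring could look like, assuming the spring-integration-cassandra extension is on the classpath and a CassandraOperations bean exists (the channel name is taken from the question):

@Bean
@ServiceActivator(inputChannel = "consulAgentSelfFileChannel")
public MessageHandler cassandraMessageHandler(CassandraOperations cassandraOperations) {
    // By default the handler performs an INSERT of the message payload entity
    return new CassandraMessageHandler<>(cassandraOperations);
}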
So I finally made it work. All I needed to do was this:
@Component
public final class CassandraStorageServiceImpl implements CassandraStorageService {

    @ServiceActivator(inputChannel = "consulAgentSelfFileChannel")
    @Override
    public void store(AgentSelfResult agentSelfResult) {
        // use a spring-data-cassandra repository to store
        LOG.info("Received 'AgentSelfResult': {}", agentSelfResult);
        LOG.info("Trying to store 'AgentSelfResult' in Cassandra cluster...");
    }
}
The CassandraMessageHandler and spring-cloud-stream seemed to be too big an overhead for my use case, and I don't really understand them yet... And with this solution, I keep control over what happens in my Spring component.
I have developed my own IdGenerator based on Hazelcast's IdGenerator class (storing each last-used ID in a DB). Now I want to run the Hazelcast cluster as one standalone Java application and my web application as another (a web-application restart shouldn't move the ID values to the next block). I moved MyIdGeneratorProxy and MyIdGeneratorService to the new application, ran it, ran the web application as a Hazelcast client, and got:
IllegalArgumentException: No factory registered for service: ecs:impl:idGeneratorService
It was okay when client and server were the same application.
It seems it's unable to proceed without some client proxy. I have compared IdGeneratorProxy and ClientIdGeneratorProxy and they look the same. What is the idea? How do I write a client proxy for custom services? I have found no documentation yet. Is my direction of investigation correct? I thought it was possible to separate Hazelcast's inner services (like the ID generator service) from my business processes. Should I keep a custom ClientProxy (for my custom SPI) in my web application?
This is a demo of how to create a client proxy. The missing part, the CustomClientProxy method bodies, is quite complicated (more like a server proxy: on the client side it is called a ReadRequest, on the server side an Operation); you can look at how AtomicLong is implemented. For every client proxy method you have to make a request.
@Test
public void client() throws InterruptedException, IOException {
    ClientConfig cfg = new XmlClientConfigBuilder("hazelcast-client.xml").build();

    ServiceConfig serviceConfig = new ServiceConfig();
    serviceConfig.setName(ConnectorService.NAME)
            .setClassName(ConnectorService.class.getCanonicalName())
            .setEnabled(true);

    ProxyFactoryConfig proxyFactoryConfig = new ProxyFactoryConfig();
    proxyFactoryConfig.setService(ConnectorService.NAME);
    proxyFactoryConfig.setClassName(CustomProxyFactory.class.getName());
    cfg.addProxyFactoryConfig(proxyFactoryConfig);

    HazelcastInstance hz = HazelcastClient.newHazelcastClient(cfg);

    Thread.sleep(1000);

    for (int i = 0; i < 10; i++) {
        Connector c = hz.getDistributedObject(ConnectorService.NAME,
                "Connector:" + ThreadLocalRandom.current().nextInt(10000));
        System.out.println(c.snapshot());
    }
}

private static class CustomProxyFactory implements ClientProxyFactory {

    @Override
    public ClientProxy create(String id) {
        return new CustomClientProxy(ConnectorService.NAME, id);
    }
}

private static class CustomClientProxy extends ClientProxy implements Connector {

    protected CustomClientProxy(String serviceName, String objectName) {
        super(serviceName, objectName);
    }

    @Override
    public ConnectorState snapshot() {
        return null;
    }

    @Override
    public void loadState(ConnectorState state) {
    }

    @Override
    public boolean reconnect(HostNode node) {
        return false;
    }

    @Override
    public boolean connect() {
        return false;
    }
}
EDIT
In Hazelcast, IdGenerator is implemented as a wrapper around AtomicLong. You should implement your IdGenerator on your own instead of extending IdGenerator.
So you have to implement these (more like a todo list XD):
API
- interface MyIdGenerate
Server
- MyIdGenerateService
- MyIdGenerateProxy
- MyIdGenerateXXXOperation
Client
- ClientMyIdGenerateFactory
- ClientMyIdGenerateProxy
- MyIdGenerateXXXRequest
I also made a sequence (same as IdGenerator) here; it is backed by ZooKeeper or Redis, and it's easy to add a DB backend, too. I will integrate it into Hazelcast if I get time.