I'm working with Hazelcast version 3.10 and I'm trying to use the map's localKeySet() method. The following happens:
If I work without a MapStore, localKeySet() works correctly: on each local node it returns only the reduced set of keys owned by that node.
If I add a MapStore to the map configuration, localKeySet() no longer seems to work correctly: on each local node it returns all keys in the map.
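For illustration, this is roughly how I check the keys on each member; a simplified sketch where the map name and value type are placeholders:
HazelcastInstance instance = Hazelcast.newHazelcastInstance(config);
IMap<String, String> map = instance.getMap("myMap");
map.put("key-" + instance.getCluster().getLocalMember().getUuid(), "value");
// Expectation: each member prints only the keys it owns, not every key in the cluster.
System.out.println("localKeySet size: " + map.localKeySet().size());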
I configured the map with this function:
private MapConfig mapConfigurationAux(Config config, String name, int backupCount, boolean statisticsEnabled,
        int mapStoreDelay, MapStore implementationMapStore) {
    MapConfig mapConfig = config.getMapConfig(name);
    mapConfig.setBackupCount(backupCount);
    mapConfig.setInMemoryFormat(InMemoryFormat.OBJECT);
    mapConfig.setStatisticsEnabled(statisticsEnabled);
    if (implementationMapStore != null) {
        MapStoreConfig mapStoreConfig = new MapStoreConfig();
        mapStoreConfig.setEnabled(true);
        mapStoreConfig.setImplementation(implementationMapStore);
        mapStoreConfig.setWriteDelaySeconds(mapStoreDelay);
        mapStoreConfig.setWriteBatchSize(100);
        mapStoreConfig.setInitialLoadMode(InitialLoadMode.LAZY);
        mapConfig.setMapStoreConfig(mapStoreConfig);
    }
    return mapConfig;
}
What could I be doing wrong?
I found the problem. As expected, it was a Hazelcast configuration issue: the method that sets up the NetworkConfig was adding 127.0.0.1 as the enabled interface on every instance of the cluster.
NetworkConfig network = cfg.getNetworkConfig();
network.setPort(port).setPortAutoIncrement(true);
network.setPublicAddress(publicAddress);
network.getInterfaces().addInterface("127.0.0.1").setEnabled(true);
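For reference, the fix was to stop enabling the loopback interface on every member; a simplified sketch of the corrected setup, where the interface pattern is a placeholder for each host's real address:
NetworkConfig network = cfg.getNetworkConfig();
network.setPort(port).setPortAutoIncrement(true);
network.setPublicAddress(publicAddress);
// Enable the member's real network interface instead of 127.0.0.1,
// so every node advertises an address the other members can reach.
network.getInterfaces().addInterface("192.168.1.*").setEnabled(true);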
I have a TypeConverter in which I'm using context.Mapper.Map() to map two sub-properties.
The properties are of the same type and use the same (separate) TypeConverter. However, for one of the properties I need to pass some IMappingOperationOptions.
It looks like this (simplified):
public class MyTypeConverter : ITypeConverter<A, B>
{
    public B Convert(A source, B destination, ResolutionContext context)
    {
        var subProp1 = context.Mapper.Map<C>(source.SomeProp);
        var subProp2 = context.Mapper.Map<C>(source.SomeOtherProp, opts => opts.Items["someOption"] = "someValue");
        return new B
        {
            SubProp1 = subProp1,
            SubProp2 = subProp2
        };
    }
}
This was working fine in AutoMapper 8.0.0, but I'm upgrading to AutoMapper 10.1.1 (the last version with .NET Framework support).
In this newer version of AutoMapper, the Map overload that accepts IMappingOperationOptions no longer exists on the ResolutionContext's Mapper.
I could (theoretically) solve this by injecting IMapper into the constructor of the type converter and using that instead of the ResolutionContext's Mapper, but that doesn't feel right.
At the moment I have solved the issue by temporarily updating the ResolutionContext options, but that doesn't really feel right either.
var subProp1 = context.Mapper.Map<C>(source.SomeProp);
context.Options.Items["someOption"] = "someValue";
var subProp2 = context.Mapper.Map<C>(source.SomeOtherProp);
context.Options.Items.Remove("someOption");
Casting ((IMapper)context.Mapper).Map() crashes so that's not an option either. Is there a more elegant way to achieve this?
I've built a Docker image of GridGain Pro and I'm running it.
With Java I do the following:
Create the following @Configuration class:
@Configuration
@EnableCaching
public class CustomConfiguration extends CachingConfigurerSupport {

    @Bean
    @Override
    public KeyGenerator keyGenerator() {
        return (target, method, params) -> {
            StringBuilder sb = new StringBuilder();
            sb.append(target.getClass().getName());
            sb.append(method.getName());
            for (Object obj : params) {
                sb.append("|");
                sb.append(obj.toString());
            }
            return sb.toString();
        };
    }

    @Bean("cacheManager")
    public SpringCacheManager cacheManager(IgniteConfiguration igniteConfiguration) {
        try {
            SpringCacheManager springCacheManager = new SpringCacheManager();
            springCacheManager.setIgniteInstanceName("ignite");
            springCacheManager.setConfiguration(igniteConfiguration);
            springCacheManager.setDynamicCacheConfiguration(new CacheConfiguration<>().setCacheMode(CacheMode.REPLICATED));
            return springCacheManager;
        } catch (Exception ex) {
        }
        return null;
    }

    @Bean
    @Profile("!dev")
    IgniteConfiguration igniteConfiguration() {
        GridGainConfiguration gridGainConfiguration = new GridGainConfiguration();
        gridGainConfiguration.setRollingUpdatesEnabled(true);
        IgniteConfiguration igniteConfiguration = new IgniteConfiguration()
                .setPluginConfigurations(gridGainConfiguration)
                .setClientMode(true)
                .setPeerClassLoadingEnabled(false)
                .setIgniteInstanceName("MyIgnite");
        DataStorageConfiguration dataStorageConfiguration = new DataStorageConfiguration();
        DataRegionConfiguration dataRegionConfiguration = new DataRegionConfiguration();
        dataRegionConfiguration.setInitialSize(20 * 1024 * 1024);
        dataRegionConfiguration.setMaxSize(40 * 1024 * 1024);
        dataRegionConfiguration.setMetricsEnabled(true);
        dataStorageConfiguration.setDefaultDataRegionConfiguration(dataRegionConfiguration);
        igniteConfiguration.setDataStorageConfiguration(dataStorageConfiguration);
        TcpDiscoverySpi tcpDiscoverySpi = new TcpDiscoverySpi();
        TcpDiscoveryVmIpFinder tcpDiscoveryVmIpFinder = new TcpDiscoveryVmIpFinder();
        tcpDiscoveryVmIpFinder.setAddresses(Arrays.asList("192.168.99.100:47500..47502"));
        tcpDiscoverySpi.setIpFinder(tcpDiscoveryVmIpFinder);
        igniteConfiguration.setDiscoverySpi(tcpDiscoverySpi);
        return igniteConfiguration;
    }
}
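For reference, this is roughly how I intend to use the cache once the configuration above works; a minimal sketch where the service class, cache name, and lookup logic are placeholders:
@Service
public class ProductService {

    // On a cache miss the method body runs and the result is stored in the Ignite-backed
    // cache named "products"; on a hit the cached value is returned without executing it.
    @Cacheable("products")
    public String loadProduct(long productId) {
        return "product-" + productId; // placeholder for a real lookup (e.g. a repository call)
    }
}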
Start Spring and get the following error:
2018-04-18 12:27:29.277 WARN 12588 --- [ main] .GridEntDiscoveryNodeValidationProcessor : GridGain node cannot be in one cluster with Ignite node [locNodeAddrs=[server/0:0:0:0:0:0:0:1, server/10.29.96.164, server/127.0.0.1, /192.168.56.1, /192.168.99.1], rmtNodeAddrs=[172.17.0.1/0:0:0:0:0:0:0:1%lo, 192.168.99.100/10.0.2.15, 10.0.2.15/127.0.0.1, /172.17.0.1, /192.168.99.100]]
2018-04-18 12:27:29.283 ERROR 12588 --- [ main] o.a.i.internal.IgniteKernal%MyIgnite : Got exception while starting (will rollback startup routine).
I'm trying to use GridGain as a replacement for Redis and use the @Cacheable annotation.
Does anyone have a working GridGain example?
What is causing the error above?
G.
1) Okay, it seems the issue was not providing H2 as a dependency.
2) Using GridGain Professional instead of GridGain Enterprise.
G.
GridGain node cannot be in one cluster with Ignite node is pretty self-explanatory.
Either you forgot to stop some local Apache Ignite node from earlier experiments.
Or you deliberately tried to make GridGain join an Ignite cluster.
Or, better yet, there is an instance of Apache Ignite running somewhere in your local network, and you have set up multicast discovery or some other too-broad discovery, so the nodes see each other.
Maybe the gridgain-core.x.x.x.jar is missing from one of the nodes' classpaths. Check and add it if necessary.
I intend to use Apache Flink to read data from and write data to Cassandra. I was hoping to use flink-connector-cassandra, but I can't find good documentation/examples for the connector.
Can you please point me to the right way to read and write data from Cassandra using Apache Flink? I only see sink examples, which are purely for writing. Is Apache Flink meant for reading data from Cassandra too, similar to Apache Spark?
I had the same question, and this is what I was looking for. I don't know if it is oversimplified for what you need, but I figured I should show it nonetheless.
ClusterBuilder cb = new ClusterBuilder() {
    @Override
    public Cluster buildCluster(Cluster.Builder builder) {
        return builder.addContactPoint("urlToUse.com").withPort(9042).build();
    }
};

CassandraInputFormat<Tuple2<String, String>> cassandraInputFormat =
        new CassandraInputFormat<>("SELECT * FROM example.cassandraconnectorexample", cb);
cassandraInputFormat.configure(null);
cassandraInputFormat.open(null);

Tuple2<String, String> testOutputTuple = new Tuple2<>();
cassandraInputFormat.nextRecord(testOutputTuple);

System.out.println("column1: " + testOutputTuple.f0);
System.out.println("column2: " + testOutputTuple.f1);
The way I figured this out was by finding the code for the "CassandraInputFormat" class and seeing how it worked (http://www.javatips.net/api/flink-master/flink-connectors/flink-connector-cassandra/src/main/java/org/apache/flink/batch/connectors/cassandra/CassandraInputFormat.java). Based on the name, I honestly expected it to just be a format and not a full class for reading from Cassandra, and I have a feeling others might think the same thing.
ClusterBuilder cb = new ClusterBuilder() {
    @Override
    public Cluster buildCluster(Cluster.Builder builder) {
        return builder.addContactPoint("localhost").withPort(9042).build();
    }
};

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
StreamTableEnvironment tableEnv = TableEnvironment.getTableEnvironment(env);

InputFormat inputFormat = new CassandraInputFormat<Tuple3<Integer, Integer, Integer>>("SELECT * FROM test.example;", cb); //, TypeInformation.of(Tuple3.class));
DataStreamSource t = env.createInput(inputFormat, TupleTypeInfo.of(new TypeHint<Tuple3<Integer, Integer, Integer>>() {}));
tableEnv.registerDataStream("t1", t);
Table t2 = tableEnv.sql("select * from t1");
t2.printSchema();
You can extend RichFlatMapFunction in your own class:
class MongoMapper extends RichFlatMapFunction[JsonNode, JsonNode] {

  var userCollection: MongoCollection[Document] = _

  override def open(parameters: Configuration): Unit = {
    // Do something here like opening the connection
    val client: MongoClient = MongoClient("mongodb://localhost:10000")
    userCollection = client.getDatabase("gp_stage").getCollection("users").withReadPreference(ReadPreference.secondaryPreferred())
    super.open(parameters)
  }

  override def flatMap(event: JsonNode, out: Collector[JsonNode]): Unit = {
    // Do something here per record; this function can use objects initialized in open
    userCollection.find(Filters.eq("_id", somevalue)).limit(1).first().subscribe(
      (result: Document) => {
        // println(result)
      },
      (t: Throwable) => {
        println(t)
      },
      () => {
        out.collect(event)
      }
    )
  }
}
Basically, the open function executes once per worker and flatMap executes once per record. The example is for Mongo but can be used similarly for Cassandra.
In your case, as I understand it, the first step of your pipeline is reading data from Cassandra, so rather than writing a RichFlatMapFunction you should write your own RichSourceFunction.
As a reference you can have a look at the simple implementation of WikipediaEditsSource.
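To give an idea of the shape of such a source, here is a minimal sketch, assuming the DataStax Java driver and the example.cassandraconnectorexample table from the earlier answer; the contact point, query, and column names are placeholders, and checkpointing/error handling are omitted:
public class CassandraRowSource extends RichSourceFunction<Tuple2<String, String>> {

    private transient Cluster cluster;
    private transient Session session;
    private volatile boolean running = true;

    @Override
    public void open(Configuration parameters) {
        // Open the connection once per task, just like open() in the RichFlatMapFunction example.
        cluster = Cluster.builder().addContactPoint("localhost").withPort(9042).build();
        session = cluster.connect();
    }

    @Override
    public void run(SourceContext<Tuple2<String, String>> ctx) {
        // Emit one record per Cassandra row; a real source might paginate or push down filters.
        for (Row row : session.execute("SELECT column1, column2 FROM example.cassandraconnectorexample")) {
            if (!running) {
                break;
            }
            ctx.collect(Tuple2.of(row.getString("column1"), row.getString("column2")));
        }
    }

    @Override
    public void cancel() {
        running = false;
    }

    @Override
    public void close() {
        if (session != null) session.close();
        if (cluster != null) cluster.close();
    }
}
You would then plug it in with env.addSource(new CassandraRowSource()) to get a DataStream to work with.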
I'm using AutoMapper for object conversion, where the source is a table class and the destination is a property class.
I'm using .dml to connect to the database.
App type: Windows.
Platform: VS-12, .NET Framework 4.5, AutoMapper version 4.2.1.
Issue: when I convert a single object, AutoMapper converts it successfully, but when I map a list it returns zero items.
In the Config class:
public static void Initialize()
{
    Mapper.CreateMap<Source, destination>().ReverseMap();
    Mapper.CreateMap<List<Source>, List<destination>>().ReverseMap();
}
In code:
// This runs successfully
Mapper.Map(result, objdestination);
// This does not work and does not throw any exception
Mapper.Map(listresult, listdestination);
Thanks in advance.
var config = new MapperConfiguration(cfg =>
{
    cfg.CreateMap<Source, destination>().ReverseMap();
});
config.AssertConfigurationIsValid(); // check that the configuration is valid
IMapper mapper = config.CreateMapper();
var appProduct = mapper.Map<List<destination>>(sourceObj);
I'm trying to use the JDT SearchEngine to find references to a given element, but I'm getting a NullPointerException when invoking the search method of org.eclipse.jdt.core.search.SearchEngine.
Following is the error trace:
java.lang.NullPointerException
    at org.eclipse.jdt.internal.core.search.BasicSearchEngine.findMatches(BasicSearchEngine.java:214)
    at org.eclipse.jdt.internal.core.search.BasicSearchEngine.search(BasicSearchEngine.java:515)
    at org.eclipse.jdt.core.search.SearchEngine.search(SearchEngine.java:582)
The following is the method I'm using to perform the search:
private static void search(String elementName) { // elementName -> a method name
    try {
        SearchPattern pattern = SearchPattern.createPattern(elementName, IJavaSearchConstants.METHOD,
                IJavaSearchConstants.REFERENCES, SearchPattern.R_PATTERN_MATCH);
        IJavaSearchScope scope = SearchEngine.createWorkspaceScope();
        SearchRequestor requestor = new SearchRequestor() {
            @Override
            public void acceptSearchMatch(SearchMatch match) {
                System.out.println("Element - " + match.getElement());
            }
        };
        SearchEngine searchEngine = new SearchEngine();
        SearchParticipant[] searchParticipants = new SearchParticipant[] { SearchEngine.getDefaultSearchParticipant() };
        searchEngine.search(pattern, searchParticipants, scope, requestor, null);
    } catch (Exception e) {
        e.printStackTrace();
    }
}
Refer the "Variables" window of the following snapshot to check the values of the arguments passing to the "searchEngine.search()":
I think the the issue is because of the value of "scope" [Highlighted in 'BLACK' above].
Which means "SearchEngine.createWorkspaceScope()" doesn't return expected values in this case.
NOTE: Please note that this is part of a program that runs as a stand-alone Java application (not an Eclipse plug-in) and uses JDT APIs to parse given source code (via the JDT AST).
Isn't it possible to use the JDT SearchEngine in such a case (a non-Eclipse-plugin program), or is this issue due to some other reason?
I'd really appreciate your answer on this.
No. You cannot use the search engine without opening a workspace. The reason is that the SearchEngine relies on the Eclipse filesystem abstraction (IResource, IFile, IFolder, etc.). This is only available when the workspace is open.