Hazelcast map max-size configuration without eviction policy

In the Hazelcast map configuration, if we set eviction-policy to NONE and use max-idle-seconds and time-to-live-seconds like below:
<map name="simpleMap">
    <backup-count>0</backup-count>
    <max-idle-seconds>360</max-idle-seconds>
    <time-to-live-seconds>30</time-to-live-seconds>
    <eviction-policy>NONE</eviction-policy>
    <max-size>3000</max-size>
    <eviction-percentage>30</eviction-percentage>
    <merge-policy>com.hazelcast.map.merge.PutIfAbsentMapMergePolicy</merge-policy>
</map>
Can someone explain whether max-size will work in this case?

Configuring max-size with no eviction policy is not a valid configuration; please check the description here.
If you want max-size to work, set the eviction policy to a value other than NONE.
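For example, here is a variant of the configuration from the question in which max-size takes effect; LRU is just one possible choice of policy, pick whichever fits your access pattern:

```xml
<map name="simpleMap">
    <backup-count>0</backup-count>
    <max-idle-seconds>360</max-idle-seconds>
    <time-to-live-seconds>30</time-to-live-seconds>
    <!-- With LRU (or LFU/RANDOM), max-size is enforced and the
         least-recently-used entries are evicted once it is reached. -->
    <eviction-policy>LRU</eviction-policy>
    <max-size>3000</max-size>
    <eviction-percentage>30</eviction-percentage>
    <merge-policy>com.hazelcast.map.merge.PutIfAbsentMapMergePolicy</merge-policy>
</map>
```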


Making ServiceStack RedisSentinel use a RedisManagerPool instead of a PooledRedisClientManager

Using ServiceStack version 4.0.40.
I am trying to get RedisSentinel to use the RedisManagerPool instead of the
PooledRedisClientManager so it will allow more clients than the client pool size.
I see this in the docs to set this...
sentinel.RedisManagerFactory = (master,slaves) => new RedisManagerPool(master);
I'm not sure how to use this. Do I pass in the master host name? What if I don't know which host is the master because of a previous failover? I can't call sentinel.Start() to find out which is the master, because it will start with the PooledRedisClientManager, which isn't what I want.
Or do I pass in the sentinel hosts? RedisManagerPool takes a list of hosts, so I can pass in the sentinel hosts, but I cannot assign it to sentinel.RedisManagerFactory, as a RedisManagerPool is not convertible to a RedisManagerFactory.
I think I am missing something simple here. Any help appreciated.
UPDATE
As per mythz's comment below, this isn't available in version 4.0.40 of ServiceStack, but you can use:
sentinel.RedisManagerFactory.FactoryFn = (master, slaves) => new RedisManagerPool(master);
Thanks
This is literally the config you need to use to change RedisSentinel to use RedisManagerPool:
sentinel.RedisManagerFactory = (master,slaves) =>
new RedisManagerPool(master);
You don't need to pass anything else; the master host is supplied through the lambda argument.

What is the purpose of "EnableSubscriptionPartitioning" property in an Azure Service Bus Topic?

When it comes to partitioning in an Azure Service Bus Topic, there are two properties: EnablePartitioning and EnableSubscriptionPartitioning.
It is very clear to me what EnablePartitioning property does. Based on my understanding of this property, essentially when this property is set to true, the topic in question will be partitioned across multiple message brokers.
What I am not able to find is any concrete information on EnableSubscriptionPartitioning property. The documentation I looked at simply describes this property as:
Value that indicates whether partitioning is enabled or disabled.
Furthermore, when I create a topic with this property set to true (and EnablePartitioning set to false), a topic is created for me with a size of 118784 MB (the MaxSizeInMegabytes property). Here's the response XML I get when I fetch the topic's properties.
<entry xml:base="https://namespace.servicebus.windows.net/$Resources/topics?api-version=2016-07">
<id>https://namespace.servicebus.windows.net/gauravtesttopic?api-version=2016-07</id>
<title type="text">gauravtesttopic</title>
<published>2017-08-18T02:00:12Z</published>
<updated>2017-08-18T02:00:18Z</updated>
<author><name>namespace</name></author>
<link rel="self" href="../gauravtesttopic?api-version=2016-07"/>
<content type="application/xml">
<TopicDescription xmlns="http://schemas.microsoft.com/netservices/2010/10/servicebus/connect" xmlns:i="http://www.w3.org/2001/XMLSchema-instance">
<DefaultMessageTimeToLive>P10675199DT2H48M5.4775807S</DefaultMessageTimeToLive>
<MaxSizeInMegabytes>118784</MaxSizeInMegabytes>
<RequiresDuplicateDetection>false</RequiresDuplicateDetection>
<DuplicateDetectionHistoryTimeWindow>PT10M</DuplicateDetectionHistoryTimeWindow>
<EnableBatchedOperations>true</EnableBatchedOperations>
<SizeInBytes>0</SizeInBytes>
<FilteringMessagesBeforePublishing>false</FilteringMessagesBeforePublishing>
<IsAnonymousAccessible>false</IsAnonymousAccessible>
<AuthorizationRules></AuthorizationRules>
<Status>Active</Status>
<CreatedAt>2017-08-18T02:00:11.5270915Z</CreatedAt>
<UpdatedAt>2017-08-18T02:00:18.087Z</UpdatedAt>
<AccessedAt>0001-01-01T00:00:00Z</AccessedAt>
<SupportOrdering>true</SupportOrdering>
<CountDetails xmlns:d2p1="http://schemas.microsoft.com/netservices/2011/06/servicebus">
<d2p1:ActiveMessageCount>0</d2p1:ActiveMessageCount>
<d2p1:DeadLetterMessageCount>0</d2p1:DeadLetterMessageCount>
<d2p1:ScheduledMessageCount>0</d2p1:ScheduledMessageCount>
<d2p1:TransferMessageCount>0</d2p1:TransferMessageCount>
<d2p1:TransferDeadLetterMessageCount>0</d2p1:TransferDeadLetterMessageCount>
</CountDetails>
<SubscriptionCount>0</SubscriptionCount>
<AutoDeleteOnIdle>P10675199DT2H48M5.4775807S</AutoDeleteOnIdle>
<EnablePartitioning>false</EnablePartitioning>
<IsExpress>false</IsExpress>
<EntityAvailabilityStatus>Available</EntityAvailabilityStatus>
<EnableSubscriptionPartitioning>true</EnableSubscriptionPartitioning>
<EnableExpress>false</EnableExpress>
</TopicDescription>
</content>
</entry>
The problem I run into with this is that when I try to update the topic, I get an error from the service complaining about an invalid size: because the topic is not partitioned, the size should be one of 1 GB, 2 GB, 3 GB, 4 GB or 5 GB.
Any insights into this would be highly appreciated.

Hazelcast absolute expiration of items

I am using Hazelcast 3.6.3 on Scala 2.11.8 and have written this code:
val config = new Config("mycluster")
config.getNetworkConfig.getJoin.getMulticastConfig.setEnabled(false)
config.getNetworkConfig.getJoin.getAwsConfig.setEnabled(false)
config.getNetworkConfig.getJoin.getTcpIpConfig.setMembers(...)
config.getNetworkConfig.getJoin.getTcpIpConfig.setEnabled(true)
val hc = Hazelcast.newHazelcastInstance(config)
hc.getConfig.addMapConfig(new MapConfig()
.setName("foo")
.setBackupCount(1)
.setTimeToLiveSeconds(3600)
.setAsyncBackupCount(1)
.setInMemoryFormat(InMemoryFormat.BINARY)
.setMaxSizeConfig(new MaxSizeConfig(1, MaxSizePolicy.USED_HEAP_SIZE))
)
hc.getMap[String, (String, Int)]("foo").put("1", ("foo", 10))
I notice that when the hour is over, Hazelcast does not remove the items from the cache; they seem to live forever.
I don't want sliding expiration; I want absolute expiration, meaning that after 1 hour the item has to be kicked out no matter how many times it was accessed during that hour.
I have done the required googling and I think my code above is correct, but when I look at my server logs I am pretty sure that nothing is removed from the cache.
Sorry, I am not a Scala guy, but can you explain what hc.getConfig.addMapConfig does?
Normally you need to add the TTL config to the Config object before starting Hazelcast.
I believe in your case you are starting Hazelcast first and only then updating the config with the TTL; please try the reverse order.
If you don't want to add this to the configuration, there is an overloaded map.put method that takes a TTL as an input, so you can specify the TTL per entry.
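To illustrate the ordering point, here is a minimal sketch reusing the names from the question (network/join settings elided); the key difference is that the MapConfig is registered on the Config before the instance starts:

```scala
val config = new Config("mycluster")
// ... network/join settings as in the question ...
config.addMapConfig(new MapConfig()
  .setName("foo")
  .setBackupCount(1)
  .setTimeToLiveSeconds(3600)          // entries expire ~1 hour after creation
  .setInMemoryFormat(InMemoryFormat.BINARY))
// The MapConfig is part of the Config *before* the instance starts,
// so the TTL actually applies to the "foo" map.
val hc = Hazelcast.newHazelcastInstance(config)
```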

Setting TTL/Record Expiry in hazelcast

Is there any way to set a TTL per record in Hazelcast, preferably in a Map or Ringbuffer?
I guess you're looking for this:
IMap::put(Key, Value, TTL, TimeUnit)
IMap:
http://docs.hazelcast.org/docs/3.6/manual/html-single/index.html#evicting-specific-entries
http://docs.hazelcast.org/docs/3.6/javadoc/com/hazelcast/core/IMap.html#put(K,%20V,%20long,%20java.util.concurrent.TimeUnit)
http://docs.hazelcast.org/docs/3.6/javadoc/com/hazelcast/core/IMap.html#putAsync(K,%20V,%20long,%20java.util.concurrent.TimeUnit)
http://docs.hazelcast.org/docs/3.6/javadoc/com/hazelcast/core/IMap.html#putIfAbsent(K,%20V,%20long,%20java.util.concurrent.TimeUnit)
http://docs.hazelcast.org/docs/3.6/javadoc/com/hazelcast/core/IMap.html#putTransient(K,%20V,%20long,%20java.util.concurrent.TimeUnit)
http://docs.hazelcast.org/docs/3.6/javadoc/com/hazelcast/core/IMap.html#set(K,%20V,%20long,%20java.util.concurrent.TimeUnit)
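As a usage sketch (assuming hz is a running HazelcastInstance; the map name is arbitrary), the linked overload applies the TTL to a single entry:

```scala
import java.util.concurrent.TimeUnit

val map = hz.getMap[String, String]("myMap")
// This entry expires roughly 60 seconds after the put,
// independent of any map-level TTL configuration.
map.put("key", "value", 60, TimeUnit.SECONDS)
```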

Logging Spark Configuration Properties

I'm trying to log the properties of each Spark application that runs in a YARN cluster (properties like spark.shuffle.compress, spark.reducer.maxMbInFlight, spark.executor.instances and so on).
However, I don't know if this information is logged anywhere. I know that we can access the YARN logs through the "yarn" command, but the properties I'm talking about are not stored there.
Is there any way to access this kind of info? The idea is to have a trace of all the applications that run in the cluster together with their properties, to identify which ones have the most impact on execution time.
You could log it yourself... use sc.getConf.toDebugString, sqlContext.getConf("<key>") or sqlContext.getAllConfs:
scala> sqlContext.getConf("spark.sql.shuffle.partitions")
res129: String = 200
scala> sqlContext.getAllConfs
res130: scala.collection.immutable.Map[String,String] = Map(hive.server2.thrift.http.cookie.is.httponly -> true, dfs.namenode.resource.check.interval ....
scala> sc.getConf.toDebugString
res132: String =
spark.app.id=local-1449607289874
spark.app.name=Spark shell
spark.driver.host=10.5.10.153
Edit: However, I could not find the properties you specified among the 1200+ properties in sqlContext.getAllConfs :( Otherwise the documentation says:
The application web UI at http://<driver>:4040 lists Spark properties
in the “Environment” tab. This is a useful place to check to make sure
that your properties have been set correctly. Note that only values
explicitly specified through spark-defaults.conf, SparkConf, or the
command line will appear. For all other configuration properties, you
can assume the default value is used.
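Building on that, one way to get the trace you want (a sketch, assuming a live SparkContext named sc) is to dump the explicitly set properties at application startup so they end up in the driver's YARN log, where "yarn logs" can retrieve them later:

```scala
// Emits every property explicitly set via spark-defaults.conf, SparkConf,
// or the command line; unset properties fall back to defaults and do not appear.
sc.getConf.getAll.foreach { case (key, value) =>
  println(s"[spark-conf] $key=$value")
}
```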
