I have a counter in the database that maintains the total number of "downloads". The counter is updated every time a user downloads a file and is keyed by file name: if there is no entry for the name yet, we create one with a counter value of 1, otherwise we increment the existing entry.
As per our deployment, we have two instances of the same application running.
The problem I am facing is how to prevent a lost update when another thread is already updating or creating the same counter.
Thread 1:
currentCounter = 1
updateOperation = 1 + 1 = 2

Thread 2 (at the same time):
currentCounter = 1
updateOperation = 1 + 1 = 2

Expected: updateOperation = 2 + 1 = 3
This will be an even bigger problem when I have two instances running.
I suggest using an event registration entity:
@Entity
public class EventRegistration {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    Long id;

    String eventName;
    LocalDateTime happenedAt;
    boolean handled;

    @Version
    Long version;

    protected EventRegistration() {
        // no-arg constructor required by JPA
    }

    public EventRegistration(String eventName) {
        this.eventName = eventName;
        happenedAt = LocalDateTime.now();
        handled = false;
    }

    public void setAsHandled() {
        handled = true;
    }

    // getters, equals, hashCode, toString etc...
}
You can create a new EventRegistration record for every event occurrence. Then, on a timer, update the event counter by adding the number of unhandled EventRegistrations and setting handled = true (or even deleting the handled records):
@Autowired
EventRegistrationRepository repository;

@Transactional
public void updateEventCounter(String eventName) {
    List<EventRegistration> registrations = repository.getAllByEventNameAndHandled(eventName, false);
    // update the event counter by registrations.size()
    registrations.forEach(EventRegistration::setAsHandled);
    repository.saveAll(registrations);
}
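To make the timer part concrete, here is a minimal sketch of how the method above could be driven by a schedule. The counterRepository bean and its incrementBy method are assumptions for illustration only, and scheduling must be enabled in the application.

@Autowired
EventCounterRepository counterRepository; // hypothetical repository for the stored counter

@Scheduled(fixedDelay = 60000) // run once a minute; pick whatever interval suits you
@Transactional
public void flushDownloadCounter() {
    List<EventRegistration> pending = repository.getAllByEventNameAndHandled("download", false);
    if (pending.isEmpty()) {
        return;
    }
    // add the number of unhandled registrations to the stored counter in one go
    counterRepository.incrementBy("download", pending.size());
    pending.forEach(EventRegistration::setAsHandled);
    repository.saveAll(pending);
}

Because the downloads themselves only ever append EventRegistration rows, concurrent downloads no longer race on the counter; and if both instances happen to run the flush at the same time, the @Version field on EventRegistration should make one of the two transactions fail and roll back instead of double counting.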
I am using Spring Boot and an H2 database. I have a Product entity and I want my application to be able to remove a product from the database, but my requirement is this: first set the active flag to false (so that the row is no longer taken into account when fetching) and, after a specific period of time, completely remove the row from the database.
@Entity
@Table(name = "products")
public class Product {

    @Id
    @GeneratedValue(generator = "inc")
    @GenericGenerator(name = "inc", strategy = "increment")
    private int id;

    private boolean active = true;

    // getters & setters
}
And here is my service-layer method responsible for setting the active flag to false and, later, for the complete deletion (I have nothing yet for the second part of my requirement, the complete deletion after a specific period of time):
@Transactional
public void deleteProduct(int id) {
    var target = repository.findProductById(id)
            .orElseThrow(() -> new IllegalArgumentException("No product with given id"));
    target.setActive(false);
    // what should I add here to remove the target after a specific time?
}
EDIT
OK, I solved my problem:
@Transactional
public void deleteProduct(int id) {
    var target = repository.findProductByIdAndActiveTrue(id)
            .orElseThrow(() -> new IllegalArgumentException("No product with given id"));
    target.setActive(false);
    // complete removal after 150 seconds
    new Thread(() -> {
        try {
            Thread.sleep(150000);
            repository.deleteById(id);
        } catch (Exception e) {
            logger.error("Error removing the product", e);
        }
    }).start();
}
But now my question is whether this is a safe solution, since users may end up starting far too many threads. I suspect there is a better solution to my problem (safer in terms of multithreading).
I am not an expert, but I think what you are trying to do is bad practice.
I believe you should use scheduling instead, for example once per day.
Update the active value in the database, then have a scheduled task check the entries on each run and delete the ones whose active value is false. Something like this:
public void deleteProduct(int id) {
    // update value to false
    repository.updateProductValue(id, false);
}
and your scheduling method:
@Scheduled(fixedRate = 150000)
public void deleteNonActiveProducts() {
    List<Product> products = repository.findAllByFalseValue();
    products.forEach(product -> repository.deleteById(product.getId()));
}
With this, the task is repeated every 150000 milliseconds, and each execution is independent and non-parallel.
Hope this is useful to you.
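One detail worth double-checking (my addition, not from the original answer): @Scheduled methods only run if scheduling is enabled somewhere in the application, for example:

// Without this (or an equivalent), the @Scheduled method above never fires.
@Configuration
@EnableScheduling
public class SchedulingConfig {
}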
I have the following class:
@Service
public class BusinessService {

    @Autowired
    private RedisService redisService;

    private void count() {
        String redisKey = "MyKey";
        AtomicInteger counter = null;
        if (!redisService.isExist(redisKey))
            counter = new AtomicInteger(0);
        else
            counter = redisService.get(redisKey, AtomicInteger.class);
        try {
            counter.incrementAndGet();
            redisService.set(redisKey, counter, false);
            logger.info(String.format("Counter incremented by one. Current counter = %s", counter.get()));
        } catch (JsonProcessingException e) {
            logger.severe("Failed to increment counter.");
        }
    }

    // Remaining code
}
and this is my RedisService.java class:
@Service
public class RedisService {

    private Logger logger = LoggerFactory.getLogger(RedisService.class);

    @Autowired
    private RedisConfig redisConfig;

    // Lettuce synchronous command API
    private RedisCommands<String, String> syncCommands;

    @PostConstruct
    public void postConstruct() {
        try {
            String redisURL = redisConfig.getUrl();
            logger.info("Connecting to Redis at " + redisURL);
            syncCommands = RedisClient.create(redisURL).connect().sync();
        } catch (Exception e) {
            logger.error("Exception connecting to Redis: " + e.getMessage(), e);
        }
    }

    public boolean isExist(String redisKey) {
        return syncCommands.exists(new String[] { redisKey }) == 1;
    }

    public <T extends Serializable> void set(String key, T object, boolean convertObjectToJson) throws JsonProcessingException {
        if (convertObjectToJson)
            syncCommands.set(key, writeValueAsString(object));
        else
            syncCommands.set(key, String.valueOf(object));
    }

    // Remaining code
}
and this is my test class
#Mock
private RedisService redisService;
#Spy
#InjectMocks
BusinessService businessService = new BusinessService();
#Before
public void setup() {
MockitoAnnotations.initMocks(this);
}
#Test
public void myTest() throws Exception {
for (int i = 0; i < 50; i++)
Whitebox.invokeMethod(businessService, "count");
// Remaining code
}
My problem is that the counter always equals one in the logs when running the tests:
Counter incremented by one. Current counter = 1 (printed 50 times)
and it should print:
Counter incremented by one. Current counter = 1
Counter incremented by one. Current counter = 2
...
...
Counter incremented by one. Current counter = 50
To me this means that the Redis mock is passed to BusinessService as a new instance on every method call inside the loop, so how can I force the test to use only one Redis instance for the whole test method?
Note: the above code is just a sample to explain my problem; it is not the complete code.
Your conclusion that a new RedisService is somehow created in each iteration is wrong.
The problem is that it is a mock object for which you haven’t set any behaviours, so it responds with default values for each method call (null for objects, false for bools, 0 for ints etc).
You need to use Mockito.when to set behaviour on your mocks.
There is some additional complexity caused by the fact that:
you run the loop multiple times, and the behaviour of the mocks differs between the first and subsequent iterations
you create the cached object in the method under test; I used doAnswer to capture it
you need to use doAnswer().when() instead of when().thenAnswer(), as the set method returns void
and finally, the atomicInt variable is modified from within the lambda, so I made it a field of the test class.
Since atomicInt is modified each time, I again used thenAnswer instead of thenReturn for the get method.
class BusinessServiceTest {

    @Mock
    private RedisService redisService;

    @InjectMocks
    BusinessService businessService = new BusinessService();

    AtomicInteger atomicInt = null;

    @BeforeEach
    public void setup() {
        MockitoAnnotations.initMocks(this);
    }

    @Test
    public void myTest() throws Exception {
        // given
        Mockito.when(redisService.isExist("MyKey"))
                .thenReturn(false)
                .thenReturn(true);
        Mockito.doAnswer((Answer<Void>) invocation -> {
            atomicInt = invocation.getArgument(1);
            return null;
        }).when(redisService).set(eq("MyKey"), any(AtomicInteger.class), eq(false));
        Mockito.when(redisService.get("MyKey", AtomicInteger.class))
                .thenAnswer(invocation -> atomicInt);

        // when
        for (int i = 0; i < 50; i++) {
            Whitebox.invokeMethod(businessService, "count");
        }

        // Remaining code
    }
}
Having said that, I still find your code questionable.
You store an AtomicInteger in the Redis cache (by serializing it to a String). This class is designed to be used by multiple threads within a single process, and the threads using the same counter need to share the same instance. By serializing it and deserializing it on get, you are getting multiple instances of the (conceptually) same counter, which, to my eyes, looks like a bug.
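If all you need is a shared counter, a simpler route (my suggestion, not part of the original answer) is to let Redis do the increment itself. A minimal sketch, assuming the same Lettuce syncCommands connection used in RedisService; the method name increment is mine:

// Redis INCR is atomic on the server, so concurrent callers (and multiple
// application instances) cannot lose updates, and nothing needs to be serialized.
public long increment(String key) {
    return syncCommands.incr(key); // creates the key with value 1 if it does not exist yet
}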
A smaller issue: you shouldn't normally test private methods.
Two small ones: there is no need to instantiate the field annotated with @InjectMocks, and you don't need @Spy either.
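In other words, the test fields could be reduced to something like this (same setup, just letting Mockito build the instance):

@Mock
private RedisService redisService;

@InjectMocks // Mockito creates the instance and injects the mock; no @Spy, no "new"
private BusinessService businessService;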
I understand that Near Caches are not guaranteed to be synchronized in real time when the value is updated elsewhere on some other node.
However I do expect it to be in sync with the EntryUpdatedListener that is on the same node and therefore the same process - or am I missing something?
Sequence of events:
A cluster of one node modifies the same key/value, flipping the value from X to Y and back to X at a fixed interval.
A client connects to this cluster node and adds an EntryUpdatedListener to observe the flipping value.
The client receives the EntryUpdatedEvent and prints the value it carries - as expected, this is the value that was just set.
The client immediately does a map.get for the same key (which should hit the Near Cache), and it prints a STALE value.
I find this strange - it means that two "channels" within the same client process are showing inconsistent versions of data. I would only expect this between different processes.
Below is my reproducer code:
public class ClusterTest {

    private static final int OLD_VALUE = 10000;
    private static final int NEW_VALUE = 88888;
    private static final int KEY = 5;
    private static final int NUMBER_OF_ENTRIES = 10;

    public static void main(String[] args) throws Exception {
        HazelcastInstance instance = Hazelcast.newHazelcastInstance();
        IMap map = instance.getMap("test");
        for (int i = 0; i < NUMBER_OF_ENTRIES; i++) {
            map.put(i, 0);
        }
        System.out.println("Size of map = " + map.size());

        boolean flag = false;
        while (true) {
            int value = flag ? OLD_VALUE : NEW_VALUE;
            flag = !flag;
            map.put(KEY, value);
            System.out.println("Set a value of [" + value + "]: ");
            Thread.sleep(1000);
        }
    }
}
public class ClientTest {

    public static void main(String[] args) throws InterruptedException {
        HazelcastInstance instance = HazelcastClient.newHazelcastClient(
                new ClientConfig().addNearCacheConfig(new NearCacheConfig("test")));
        IMap map = instance.getMap("test");
        System.out.println("Size of map = " + map.size());
        map.addEntryListener(new MyEntryListener(instance), true);
        new CountDownLatch(1).await();
    }

    static class MyEntryListener
            implements EntryAddedListener,
                       EntryUpdatedListener,
                       EntryRemovedListener {

        private HazelcastInstance instance;

        public MyEntryListener(HazelcastInstance instance) {
            this.instance = instance;
        }

        @Override
        public void entryAdded(EntryEvent event) {
            System.out.println("Entry Added:" + event);
        }

        @Override
        public void entryRemoved(EntryEvent event) {
            System.out.println("Entry Removed:" + event);
        }

        @Override
        public void entryUpdated(EntryEvent event) {
            Object o = instance.getMap("test").get(event.getKey());
            boolean equals = o.equals(event.getValue());
            String s = "Event matches what has been fetched = " + equals;
            if (!equals) {
                s += ", EntryEvent value has delivered: " + event.getValue() + ", and an explicit GET has delivered:" + o;
            }
            System.out.println(s);
        }
    }
}
The output from the client:
INFO: hz.client_0 [dev] [3.11.1] HazelcastClient 3.11.1 (20181218 - d294f31) is CLIENT_CONNECTED
Jun 20, 2019 4:58:15 PM com.hazelcast.internal.diagnostics.Diagnostics
INFO: hz.client_0 [dev] [3.11.1] Diagnostics disabled. To enable add -Dhazelcast.diagnostics.enabled=true to the JVM arguments.
Size of map = 10
Event matches what has been fetched = true
Event matches what has been fetched = false, EntryEvent value has delivered: 88888, and an explicit GET has delivered:10000
Event matches what has been fetched = true
Event matches what has been fetched = true
Event matches what has been fetched = false, EntryEvent value has delivered: 10000, and an explicit GET has delivered:88888
The Near Cache has an eventual-consistency guarantee, while listeners work in a fire-and-forget fashion; that is why there are two different mechanisms. Also, batching of Near Cache invalidation events reduces network traffic and keeps the eventing system less busy (this helps when there are many invalidations or clients); as a tradeoff it may increase the delay of individual invalidations. If you are confident that your system can handle each invalidation event, you can disable batching.
You need to configure the relevant property on the member side, as invalidation events are generated on the cluster members and sent to clients.
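For illustration, disabling batching on the member could look roughly like this; the property names below are the ones I recall from the Hazelcast 3.x documentation, so please verify them against your version before relying on them:

Config config = new Config();
// Send each Near Cache invalidation immediately instead of batching them.
config.setProperty("hazelcast.map.invalidation.batch.enabled", "false");
// Alternatively, keep batching but flush more frequently:
// config.setProperty("hazelcast.map.invalidation.batchfrequency.seconds", "1");
HazelcastInstance member = Hazelcast.newHazelcastInstance(config);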
In my application I'm trying to process data in an IMap; the scenario is as follows:
the application receives a request (REST, for example) with a set of keys to be processed
the application processes the entries with the given keys and returns the result - a map where the key is the original key of the entry and the value is the calculated result
For this scenario IMap.executeOnKeys is almost perfect, with one problem: the entry is locked while being processed, and that really hurts throughput. The IMap is populated on startup and never modified.
Is it possible to process entries without locking them? Ideally without sending entries to another node and without causing network overhead (e.g. sending 1000 tasks to a single node in a for-loop).
Here is a reference implementation to demonstrate what I'm trying to achieve:
public class Main {

    public static void main(String[] args) throws Exception {
        HazelcastInstance instance = Hazelcast.newHazelcastInstance();
        IMap<String, String> map = instance.getMap("the-map");

        // populated once on startup, never modified
        for (int i = 1; i <= 10; i++) {
            map.put("key-" + i, "value-" + i);
        }

        Set<String> keys = new HashSet<>();
        keys.add("key-1"); // every request may have a different key set, they may overlap

        System.out.println(" ---- processing ----");
        ForkJoinPool pool = new ForkJoinPool();
        // to simulate parallel requests on the same entry
        pool.execute(() -> map.executeOnKeys(keys, new MyEntryProcessor("first")));
        pool.execute(() -> map.executeOnKeys(keys, new MyEntryProcessor("second")));

        System.out.println(" ---- pool is waiting ----");
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.MINUTES);
        System.out.println(" ------ DONE -------");
    }

    static class MyEntryProcessor implements EntryProcessor<String, String> {

        private String name;

        MyEntryProcessor(String name) {
            this.name = name;
        }

        @Override
        public Object process(Map.Entry<String, String> entry) {
            System.out.println(name + " is processing " + entry);
            return calculate(entry); // may take some time, doesn't modify entry
        }

        @Override
        public EntryBackupProcessor<String, String> getBackupProcessor() {
            return null;
        }
    }
}
Thanks in advance
In executeOnKeys the entries are not locked. Maybe you mean that the processing happens on partition threads, so that no other processing can run for that particular key at the same time? Anyhow, here's the solution:
Your EntryProcessor should implement:
Offloadable interface -> this means the partition thread will be used only for reading the value; the calculation will be done in the offloading thread pool.
ReadOnly interface -> in this case the EP won't hop back onto the partition thread to store a modification you might have made to the entry. Since your EP does not modify entries, this will increase performance. A sketch of both is below.
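Roughly, against the Hazelcast 3.x API used in the question, the processor could look like this (an untested sketch; calculate is the placeholder from the original code):

static class MyEntryProcessor implements EntryProcessor<String, String>, Offloadable, ReadOnly {

    private String name;

    MyEntryProcessor(String name) {
        this.name = name;
    }

    @Override
    public Object process(Map.Entry<String, String> entry) {
        // runs on the offloading executor, so the partition thread is only used to read the value
        return calculate(entry);
    }

    @Override
    public String getExecutorName() {
        // use Hazelcast's built-in offloadable executor
        return Offloadable.OFFLOADABLE_EXECUTOR;
    }

    @Override
    public EntryBackupProcessor<String, String> getBackupProcessor() {
        // read-only processor, nothing to apply on backups
        return null;
    }
}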
I have reviewed and implemented/tested all the messaging options with ServiceStack that I know of (and I've searched on and off for a long time). The two are Pub/Sub and RedisMQ. Both of these have limitations that I needed to go beyond. I have already done this and have a solution that works perfectly for my system. The purpose of this post is to see if I missed a better way, and to check whether my solution is really thread-safe. So far it is working well and I think it is good.
What I did was create an "exchange" class, and then a "pub" class and a "sub" class. My need was to have an arbitrary number of publishers, in any number of threads, publish to an arbitrary number of subscribers, in any number of threads. The only restriction is that my publisher and my subscriber can not be in the same thread, as this causes deadlock. This is by design, so for me not a limitation, as I want blocking subscribers (this could actually be changed with one line of code, but is not my application need and would actually be a negative). Also note that the subscribers can be to unique publishers or any number of subscribers to the same publisher (fan-out). This was the MQ limitation I needed to resolve. The Pub/Sub limitations were even greater, but let's not digress into why they did not solve my needs.
The usage construct for what I call RedisMEnQ (En = Enqueue because it uses Redis.EnqueueItemOnList) is to instantiate the pub class for each publisher, and a sub class for each subscriber. The pub and sub classes both own an instance of the exchange class, thus sharing the same exchange code. There is no direct interaction between pub and sub classes except through Redis. The exchange code implements locks so that the various threads are thread safe during exchange transactions, and of course the redis connections are unique for each thread and thus thread safe.
The Exchange code is the most interesting, and quite short, so I thought I'd post it:
public class Subscriber
{
    public int id { get; set; }
    public string key { get; set; }
    public bool active { get; set; }
}

public class RedisQueueExchange
{
    private IRedisList<Subscriber> RedisSubscribers;
    IRedisClient Redis;

    public string Key { get; set; }
    public const string SubscriberListKey = "exchange:subscribers";

    private int id;
    public int ID { get { return id; } private set { id = value; } }

    private Object thisLock = new Object(); // Mutual exclusion lock

    public RedisQueueExchange(IRedisClient _redis, string _key)
    {
        Key = _key;
        Redis = _redis;
        RedisSubscribers = Redis.As<Subscriber>().Lists[SubscriberListKey];
    }

    private int addSubscriber(string _key)
    {
        Subscriber sub = new Subscriber { id = 0, active = true, key = _key };
        List<Subscriber> subscribers = RedisSubscribers.GetAll();
        int idx = subscribers.FindIndex(x => !x.active);
        if (idx < 0)
        {
            sub.id = idx = subscribers.Count;
            RedisSubscribers.Add(sub);
        }
        else
        {
            sub.id = idx;
            RedisSubscribers[idx] = sub;
        }
        return idx;
    }

    private List<Subscriber> findSubscribers(string key)
    {
        List<Subscriber> subscribers = RedisSubscribers.GetAll();
        return subscribers.FindAll(x => x.key.Equals(key));
    }

    public int Subscribe()
    {
        lock (thisLock)
        {
            ID = addSubscriber(Key);
            return ID;
        }
    }

    public string getSubscribeKey(int id)
    {
        return "sub:" + id.ToString() + ":" + Key;
    }

    public void UnSubscribe(int id)
    {
        lock (thisLock)
        {
            List<Subscriber> subscribers = RedisSubscribers.GetAll();
            int idx = subscribers.FindIndex(x => x.id == id);
            RedisSubscribers[idx].active = false;
        }
    }

    public int pubMsg(string msg)
    {
        lock (thisLock)
        {
            List<Subscriber> subList = findSubscribers(Key);
            int retVal = subList.Count;
            foreach (Subscriber sub in subList)
            {
                string subkey = "sub:" + sub.id.ToString() + ":" + Key;
                Redis.EnqueueItemOnList(subkey, msg);
            }
            return retVal;
        }
    }

    public void clearExchange()
    {
        if (RedisSubscribers != null)
            RedisSubscribers.Clear();
    }
}
There are lots of ways to approach the problem, but to understand the code one thing should be clarified: I am reusing subscriber ids, which makes it slightly more complex than it would otherwise be. I didn't want unnecessary gaps in subscriber ids, so if a subscriber unsubscribes, the next subscriber picks up the unused id. I put the locks in so that a subscriber cannot be partly subscribed while a publisher is publishing: either the subscriber is fully subscribed or not at all.