There is a lot of data that needs to be put into a Hazelcast map, and I want to prevent others from reading that data while it is being put into the map.
Is there any way to achieve this?
for example:
map a = map(1,000,000,000) // a has 1,000,000,000 elements
map b = map(2,000) // b has 2,000 elements
I want to put all of b into a.
The elements of b should only become accessible after all of them have been put into map a;
if the elements of map b haven't been put into map a entirely, they must not be accessible.
use case:
map a ={1,2,3,4,5}
map b ={a,b,c,d,e}
print a // result {1,2,3,4,5}
foreach item in b
a.put item
print a // result {1,2,3,4,5}
end foreach
print a //result {1,2,3,4,5,a,b,c,d,e}
I want to merge these two maps, but map b's elements must not be accessible via map a before the merge has finished.
My solution
Thank you to everyone for their help.
After reading the Hazelcast manual, I chose TransactionalMap to solve this problem.
TransactionalMap has READ_COMMITTED isolation, so it can suspend threads reading map(1) while the transaction is updating map(1).
``` java
static Runnable tx = new Runnable() {
@Override
public void run() {
try {
logger.info("start transaction...");
TransactionContext txCxt = hz.newTransactionContext();
txCxt.beginTransaction();
TransactionalMap<Object, Object> map = txCxt.getMap("map");
try {
logger.info("before put map(1)");
Thread.sleep(300);
map.put("1", "1"); // reader1 is blocked
logger.info("after put map(1)");
Thread.sleep(500);
map.put("2", "2"); // reader2 is blocked
logger.info("after put map(2)");
Thread.sleep(500);
txCxt.commitTransaction();
logger.info("transaction committed");
} catch (RuntimeException t) {
txCxt.rollbackTransaction();
throw t;
}
Thread.sleep(500);
} catch (InterruptedException e) {
e.printStackTrace();
} finally {
logger.info("Finished testmap size:{}, testmap(1):{}, testmap(2):{} ", testmap.size(), testmap.get("1"),
testmap.get("2"));
Hazelcast.shutdownAll();
logger.info("system exit.");
System.exit(0);
}
}
};
```
What's your motivation / use-case? You can use transactions, but that could have a bad impact on performance. Alternatively, you could use manual locking - see ILock; a sketch of that approach is below.
However, both of these techniques should be used as a last resort - when you have no chance to design your application differently.
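For illustration only, here is a minimal sketch of the ILock idea (Hazelcast 3.x API). The lock name, the helper method, and the convention that readers must take the same lock before reading are assumptions made for this sketch, not something Hazelcast enforces:

``` java
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.ILock;
import com.hazelcast.core.IMap;

import java.util.Map;

public class MergeWithLock {
    // Hypothetical helper: copy all of b into the distributed map "a" while
    // holding a cluster-wide lock. Readers must take the same lock around
    // a.get(...) calls, otherwise they are not blocked during the merge.
    public static void merge(HazelcastInstance hz, Map<String, String> b) {
        IMap<String, String> a = hz.getMap("a");
        ILock mergeLock = hz.getLock("a-merge-lock");
        mergeLock.lock();
        try {
            for (Map.Entry<String, String> e : b.entrySet()) {
                a.put(e.getKey(), e.getValue());
            }
        } finally {
            mergeLock.unlock();
        }
    }
}
```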
One way to achieve this is by locking the segments in Map b while adding to it. Once pushing the entries to Map a is complete, you can unlock the segments.
There will be performance implications with this method, though, as it requires an extra step of locking/unlocking.
Related
I'm trying to make a multithreaded merge sort and I've encountered a stack overflow error and I'm not sure what is causing it.
public static void concurrentMergeSort(int[] arr, int threadCount) {
if(threadCount <= 1){
regularMergeSort(arr);
return;
}
int middle = arr.length/2;
int[] left = Arrays.copyOfRange(arr, 0, middle); //Says error here
int[] right = Arrays.copyOfRange(arr, middle, arr.length);
concurrentMergeSort(left);//Says error here
concurrentMergeSort(right);
Thread leftSort = new Thread(new Sorting(left, threadCount));
Thread rightSort = new Thread(new Sorting(right, threadCount));
try{
leftSort.join();
rightSort.join();
}
catch (Exception ex){
ex.printStackTrace();
}
merge(arr, left, right);
}
public static void regularMergeSort(int[] arr){
if(arr.length == 1){
return;
}
int middle = arr.length/2;
int[] left = Arrays.copyOfRange(arr, 0, middle);
int[] right = Arrays.copyOfRange(arr, middle, arr.length);
regularMergeSort(left);
regularMergeSort(right);
merge(arr, left, right);
}
}
I was thinking that maybe it was the thread count never decreasing, but when I modify the thread count I still get the same result. It was also working until I split it into a regular merge sort and a concurrent merge sort. I only added the regular merge sort because I was barely getting a speed increase from the concurrent method alone, and the whole point of this modification is to reduce the sorting time with multithreading.
Your return condition in regularMergeSort is:
if(arr.length == 1)
When middle == 0, you end up creating an empty array; that terminating condition is never hit, and the recursion never ends - hence the stack overflow. Change the condition to:
if(arr.length <= 1)
And assuming your merge function handles empty arrays, you should be good.
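For clarity, here is the corrected base case in context; only the condition changes, everything else is the code from the question:

``` java
public static void regularMergeSort(int[] arr) {
    // Arrays of length 0 or 1 are already sorted. The <= also covers the
    // empty array produced when middle == 0, which previously recursed forever.
    if (arr.length <= 1) {
        return;
    }
    int middle = arr.length / 2;
    int[] left = Arrays.copyOfRange(arr, 0, middle);
    int[] right = Arrays.copyOfRange(arr, middle, arr.length);
    regularMergeSort(left);
    regularMergeSort(right);
    merge(arr, left, right);
}
```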
It is easy enough in D to create a Queue type using std.container.dlist.
I would like to have multiple threads that communicate through a queue, not with message passing (https://tour.dlang.org/tour/en/multithreading/message-passing). As I understand it, messages are designed to be received at particular points in the code; the receiving thread will block until the expected data arrives.
(EDIT: I was informed about receiveTimeout, but having no timeout and just a check is really more appropriate in this case (maybe a timeout of 0?). Also, I am not sure what the message API will do if multiple messages are sent before any are received. I will have to play with that.)
void main() {
spawn(&worker, thisTid);
// This line will block until the expected message is received.
receive (
(string message) {
writeln("Received the message: ", text);
},
)
}
What I am needing is to merely receive data if there is some. Something like this:
void main() {
Queue!string queue; // custom `Queue` type based on DList
spawn(&worker, queue);
while (true) {
// Go through any messages (while consuming `queue`)
for (string message; queue) {
writeln("Received a message: ", text);
}
// Do other stuff
}
}
I have tried using shared variables (https://tour.dlang.org/tour/en/multithreading/synchronization-sharing), but DMD complains with "Aliases to mutable thread-local data not allowed." or other errors, depending on the approach.
How would this be done in D? Or, is there a way to use messages to do this kind of communication?
This doesn't answer the specific question, but it does clear up what I think is a misunderstanding of the message passing API...
just call receiveTimeout instead of plain receive
http://dpldocs.info/experimental-docs/std.concurrency.receiveTimeout.html
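A minimal sketch of that suggestion - a non-blocking check using receiveTimeout with a zero duration. The worker and the message type here are made up for illustration:

``` d
import std.concurrency;
import std.stdio;
import core.time : Duration;

void worker(Tid owner)
{
    owner.send("hello from the worker");
}

void main()
{
    spawn(&worker, thisTid);
    while (true)
    {
        // Returns immediately: true if a message was handled, false otherwise.
        bool got = receiveTimeout(Duration.zero,
            (string message) { writeln("Received a message: ", message); }
        );
        // ... do other stuff here ...
        if (got) break; // only so this example terminates
    }
}
```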
I use this:
shared class Queue(T) {
private T[] queue;
synchronized void opOpAssign(string op)(T object) if(op == "~") {
queue ~= object;
}
synchronized size_t length(){
return queue.length;
}
synchronized T pop(){
assert(queue.length, "Please check queue length, is 0");
auto first = queue[0];
queue = queue[1..$];
return first;
}
synchronized shared(T[]) consume(){
auto copy = queue;
queue = [];
return copy;
}
}
I have gotten the answer I need.
Simply put, use core.thread rather than std.concurrency. std.concurrency manages messages for you and does not allow you to manage it yourself. core.thread is what std.concurrency uses internally.
The longer answer, here is how I fully implemented it.
I have created a Queue type that is based on a singly linked list but maintains a pointer to the last element. The Queue also uses the standard-component input range and output range interfaces (or at least I think it does), per Walter Bright's vision (https://www.youtube.com/watch?v=cQkBOCo8UrE).
The Queue is also built to allow one thread to write and another to read with very little mutexing internally so it should be fast.
The Queue I shared here https://pastebin.com/ddyPpLrp
A simple implementation to have a second thread read input:
Queue!string inputQueue = new Queue!string;
ThreadInput threadInput = new ThreadInput(inputQueue);
threadInput.start;
while (true) {
foreach (string value; inputQueue) {
writeln(value);
}
}
ThreadInput is defined as follows:
class ThreadInput : Thread {
private Queue!string queue;
this(Queue!string queue) {
super(&run);
this.queue = queue;
}
private void run() {
while (true) {
queue.put(readln);
}
}
}
The code https://pastebin.com/w5jwRVrL
The Queue again https://pastebin.com/ddyPpLrp
Let me set up this question with some background information: we have a long-running process that will be generating data in a Windows Form, so obviously some form of multithreading is going to be needed to keep the form responsive. But we also have the requirement that the form updates as many times per second as possible while still remaining responsive.
Here is a simple test example using background worker thread:
void bw_ProgressChanged(object sender, ProgressChangedEventArgs e)
{
int reportValue = (int)e.UserState;
label1.Text = reportValue.ToString();
//We can put this.Refresh() here to force repaint which gives us high repaints but we lose
//all other responsiveness with the control
}
void bw_DoWork(object sender, DoWorkEventArgs e)
{
for (int x = 0; x < 100000; x++)
{
//We could put Thread.Sleep here but we won't get highest performance updates
bw.ReportProgress(0, x);
}
}
Please see the comments in the code. Also, please don't question why I want this. The question is simple: how do we achieve the highest fidelity (most repaints) when updating the form while maintaining responsiveness? Forcing the repaint does give us updates, but then we don't process Windows messages.
I have also tried placing DoEvents, but that produces a stack overflow. What I need is some way to say, "process any Windows messages if you haven't lately". I can also see that maybe a slightly different pattern is needed to achieve this.
It seems we need to handle a few issues:
Updating the form from the non-UI thread. There are quite a few solutions to this problem, such as Invoke, a synchronization context, or the background worker pattern.
The second problem is flooding the form with too many updates, which blocks message processing, and this is the issue my question really concerns. In most examples this is handled trivially by slowing down the requests with an arbitrary wait or by only updating every X%. Neither of these solutions is appropriate for real-world applications, nor do they meet the maximum-updates-while-responsive criterion.
Some of my initial ideas on how to handle this:
Queue the items in the background worker and then dispatch them in a UI thread. This will ensure every item is painted but will result in lag which we don't want.
Perhaps use TPL
Perhaps use a timer in the UI thread to specify a refresh value. In this way, we can grab the data at the fastest rate that we can process. It will require accessing/sharing data across threads.
Update: I've switched to using a Timer to read a shared variable that the background worker thread updates. For some reason, this method gives good form responsiveness and also allows the background worker to update about 1,000x as fast. But, interestingly, it is only accurate to about 1 millisecond.
So we should be able to change the pattern to read the current time and call the updates from the bw thread without the need for the timer.
Here is the new pattern:
//Timer setup
{
RefreshTimer.SynchronizingObject = this;
RefreshTimer.Elapsed += RefreshTimer_Elapsed;
RefreshTimer.AutoReset = true;
RefreshTimer.Start();
}
void bw_DoWork(object sender, DoWorkEventArgs e)
{
for (int x = 0; x < 1000000000; x++)
{
//bw.ReportProgress(0, x);
//mUiContext.Post(UpdateLabel, x);
SharedX = x;
}
}
void RefreshTimer_Elapsed(object sender, System.Timers.ElapsedEventArgs e)
{
label1.Text = SharedX.ToString();
}
Update: And here we have the new solution that doesn't require the timer and doesn't block the thread! We achieve high calculation performance and high update fidelity with this pattern. Unfortunately, Environment.TickCount is only accurate to about 1 ms; however, we can run a batch of X updates per millisecond to get effectively finer-than-1-ms timing.
void bw_DoWork(object sender, DoWorkEventArgs e)
{
long lastTickCount = Environment.TickCount;
for (int x = 0; x < 1000000000; x++)
{
if (Environment.TickCount - lastTickCount > 1)
{
bw.ReportProgress(0, x);
lastTickCount = Environment.TickCount;
}
}
}
There is little point in trying to report progress any faster than the user can keep track of it.
If your background thread is posting messages faster than the GUI can process them (and you have all the symptoms of this - poor GUI response to user input, DoEvents runaway recursion), you have to throttle the progress updates somehow.
A common approach is to update the GUI using a main-thread form timer at a rate small enough that the user still sees an acceptable progress readout; a sketch of this follows below. You may need a mutex or critical section to protect shared data, though that may not be necessary if the progress value being monitored is an int/uint.
An alternative is to strangle the thread by forcing it to block on an event or semaphore until the GUI is idle.
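Here is a minimal sketch of the form-timer approach, assuming a System.Windows.Forms.Timer named refreshTimer and the label1 from the question (the field name is made up). Reads and writes of an int are atomic in .NET, so a simple volatile field is enough:

``` csharp
// Written by the worker thread, read by the UI thread.
private volatile int _latestValue;

private void bw_DoWork(object sender, DoWorkEventArgs e)
{
    for (int x = 0; x < 1000000000; x++)
    {
        _latestValue = x;   // no ReportProgress, so the message queue is never flooded
    }
}

// A System.Windows.Forms.Timer fires on the UI thread, so no Invoke is needed.
// An interval of 30-50 ms is already faster than the eye can follow.
private void refreshTimer_Tick(object sender, EventArgs e)
{
    label1.Text = _latestValue.ToString();
}
```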
The UI thread should not be held for more than 50ms by a CPU-bound operation taking place on it ("The 50ms Rule"). Usually, the UI work items are executed upon events, triggered by user input, completion of an IO-bound operation or a CPU-bound operation offloaded to a background thread.
However, there are some rare cases when the work needs to be done on the UI thread. For example, you may need to poll a UI control for changes because the control doesn't expose a proper onchange-style event. In particular, this applies to the WebBrowser control (DOM Mutation Observers are only being introduced, and IHTMLChangeSink doesn't always work reliably, in my experience).
Here is how it can be done efficiently, without blocking the UI thread's message queue. A few key things were used here to make this happen:
The UI work task yields (via Application.Idle) to process any pending messages
GetQueueStatus is used to decide on whether to yield or not
Task.Delay is used to throttle the loop, similar to a timer event. This step is optional; omit it if the polling needs to be as precise as possible.
async/await provide pseudo-synchronous linear code flow.
using System;
using System.Threading;
using System.Threading.Tasks;
using System.Windows.Forms;
namespace WinForms_21643584
{
public partial class MainForm : Form
{
EventHandler ContentChanged = delegate { };
public MainForm()
{
InitializeComponent();
this.Load += MainForm_Load;
}
// Update UI Task
async Task DoUiWorkAsync(CancellationToken token)
{
try
{
var startTick = Environment.TickCount;
var editorText = this.webBrowser.Document.Body.InnerText;
while (true)
{
// observe cancellation
token.ThrowIfCancellationRequested();
// throttle (optional)
await Task.Delay(50);
// yield to keep the UI responsive
await ApplicationExt.IdleYield();
// poll the content for changes
var newEditorText = this.webBrowser.Document.Body.InnerText;
if (newEditorText != editorText)
{
editorText = newEditorText;
this.status.Text = "Changed on " + (Environment.TickCount - startTick) + "ms";
this.ContentChanged(this, EventArgs.Empty);
}
}
}
catch (Exception ex)
{
MessageBox.Show(ex.Message);
}
}
async void MainForm_Load(object sender, EventArgs e)
{
// navigate the WebBrowser
var documentTcs = new TaskCompletionSource<bool>();
this.webBrowser.DocumentCompleted += (sIgnore, eIgnore) => documentTcs.TrySetResult(true);
this.webBrowser.DocumentText = "<div style='width: 100%; height: 100%' contentEditable='true'></div>";
await documentTcs.Task;
// cancel updates in 20 s
var cts = new CancellationTokenSource(20000);
// start the UI update
var task = DoUiWorkAsync(cts.Token);
}
}
// Yield via Application.Idle
public static class ApplicationExt
{
public static Task<bool> IdleYield()
{
var idleTcs = new TaskCompletionSource<bool>();
if (IsMessagePending())
{
// register for Application.Idle
EventHandler handler = null;
handler = (s, e) =>
{
Application.Idle -= handler;
idleTcs.SetResult(true);
};
Application.Idle += handler;
}
else
idleTcs.SetResult(false);
return idleTcs.Task;
}
public static bool IsMessagePending()
{
// The high-order word of the return value indicates the types of messages currently in the queue.
return 0 != (GetQueueStatus(QS_MASK) >> 16 & QS_MASK);
}
const uint QS_MASK = 0x1FF;
[System.Runtime.InteropServices.DllImport("user32.dll")]
static extern uint GetQueueStatus(uint flags);
}
}
This code is specific to WinForms. Here is a similar approach for WPF.
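As a rough sketch of the WPF equivalent (assuming .NET 4.5+, where Dispatcher.Yield is available), the "yield to idle" step can use the built-in awaitable instead of an Application.Idle handler; the method name and polling body here are placeholders:

``` csharp
using System.Threading;
using System.Threading.Tasks;
using System.Windows.Threading;

// Sketch only: adapt the polling body to the control being watched.
async Task PollUiAsync(CancellationToken token)
{
    while (!token.IsCancellationRequested)
    {
        await Task.Delay(50, token);                                  // optional throttle
        await Dispatcher.Yield(DispatcherPriority.ApplicationIdle);   // let pending input/render run
        // ... poll the UI element for changes here ...
    }
}
```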
I am writing very large (in both size and count) documents to a Solr index (hundreds of fields, many numeric and some text). I am using Tomcat 7 on W7 x64.
Based on @Maurico's suggestion when indexing millions of documents, I parallelize the write operation (see code sample below).
The write-to-Solr method is being "Task"ed out from a main loop (note: I task it out because the write op takes too long and holds up the main app).
The problem is that memory consumption grows uncontrollably; the culprit is the Solr write operations (when I comment them out, the run works fine). How do I handle this issue? Via Tomcat? Or SolrNet?
Thanks for your suggestions.
//main loop:
{
:
:
:
//indexDocsList is the list I create in main loop and "chunk" it out to send to the task.
List<IndexDocument> indexDocsList = new List<IndexDocument>();
for(int n = 0; n< N; n++)
{
indexDocsList.Add(new IndexDocument{X=1, Y=2.....});
if(n%5==0) //every 5th time we write to solr
{
var chunk = new List<IndexDocument>(indexDocsList);
indexDocsList.Clear();
Task.Factory.StartNew(() => WriteToSolr(chunk)).ContinueWith(task => chunk.Clear());
GC.Collect();
}
}
}
private void WriteToSolr(List<IndexDocument> indexDocsList)
{
try
{
if (indexDocsList == null) return;
if (indexDocsList.Count <= 0) return;
int fromInclusive = 0;
int toExclusive = indexDocsList.Count;
int subRangeSize = 25;
//TO DO: This is still leaking some serious memory, need to fix this
ParallelLoopResult results = Parallel.ForEach(Partitioner.Create(fromInclusive, toExclusive, subRangeSize), (range) =>
{
_solr.AddRange(indexDocsList.GetRange(range.Item1, range.Item2 - range.Item1));
_solr.Commit();
});
indexDocsList.Clear();
GC.Collect();
}
catch (Exception ex)
{
logger.ErrorException("WriteToSolr()", ex);
}
finally
{
GC.Collect();
};
return;
}
You are manually committing after each batch. This is the most expensive operation for Solr. In your case, I would recommend an autoCommit every x seconds together with the soft auto-commit feature (Solr 4.0); a config sketch is below. That should take care of Solr's side of things. You'll also have to tweak your JVM garbage collection options so that you don't get stop-the-world GC pauses.
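For reference, a hedged sketch of what that might look like in solrconfig.xml (the element is spelled autoSoftCommit there, and the times below are only examples); with this in place, the explicit _solr.Commit() call in WriteToSolr can be dropped:

``` xml
<updateHandler class="solr.DirectUpdateHandler2">
  <!-- Hard commit: flush to disk periodically without opening a new searcher. -->
  <autoCommit>
    <maxTime>15000</maxTime>
    <openSearcher>false</openSearcher>
  </autoCommit>
  <!-- Soft commit (Solr 4.0+): make new documents visible to searches more often. -->
  <autoSoftCommit>
    <maxTime>1000</maxTime>
  </autoSoftCommit>
</updateHandler>
```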
I have a Silverlight app. that has a basic animation where a rectangle is animated to a new position. The animation consists of two DoubleAnimation() - one transforms the X, the other transforms the Y. It works OK.
I basically want to block any other calls to this animate method until the first two animations have completed. I see that the DoubleAnimation() class has a Completed event it fires but I haven't been successful in constructing any kind of code that successfully blocks until both have completed.
I attempted to use Monitor.Enter on a private member when entering the method, then releasing the lock from one of the animations Completed event, but my attempts at chaining the two events (so the lock isn't released until both have completed) haven't been successful.
Here's what the animation method looks like:
public void AnimateRectangle(Rectangle rect, double newX, double newY)
{
var xIsComplete = false;
Duration duration = new Duration(new TimeSpan(0, 0, 0, 1, 350));
var easing = new ElasticEase() { EasingMode = EasingMode.EaseOut, Oscillations = 1, Springiness = 4 };
var animateX = new DoubleAnimation();
var animateY = new DoubleAnimation();
animateX.EasingFunction = easing;
animateX.Duration = duration;
animateY.EasingFunction = easing;
animateY.Duration = duration;
var sb = new Storyboard();
sb.Duration = duration;
sb.Children.Add(animateX);
sb.Children.Add(animateY);
Storyboard.SetTarget(animateX, rect);
Storyboard.SetTargetProperty(animateX, new PropertyPath("(Canvas.Left)"));
Storyboard.SetTarget(animateY, rect);
Storyboard.SetTargetProperty(animateY, new PropertyPath("(Canvas.Top)"));
animateX.To = newX;
animateY.To = newY;
sb.Begin();
}
EDIT (added more info)
I ran into this initially because I was calling this method from another method (as it processed items it made a call to the animation). I noticed that the items didn't end up where I expected them to. The new X/Y coordinates I pass in are based on the items current location, so if it was called multiple times before it finished, it ended up in the wrong location. As a test I added a button that only ran the animation once. It worked. However, if I click on the button a bunch of times in a row I see the same behavior as before: items end up in the wrong location.
Yes, it appears Silverlight animations run on the main UI thread. In one of my tests I added two properties that flagged whether both animations had completed yet. In the AnimateRectangle() method I checked them inside a while loop (calling Thread.Sleep). The loop never completed (so it's definitely on the same thread).
So I created a queue to process the animations in order:
private void ProcessAnimationQueue()
{
var items = this.m_animationQueue.GetEnumerator();
while (items.MoveNext())
{
while (this.m_isXanimationInProgress || this.m_isYanimationInProgress)
{
System.Threading.Thread.Sleep(100);
}
var item = items.Current;
Dispatcher.BeginInvoke(() => this.AnimateRectangle(item.Rect.Rect, item.X, item.Y));
}
}
Then I call my initial routine (which queues up the animations) and call this method on a new thread. I see the same results.
As far as I am aware, all animations in Silverlight happen on the UI thread anyway. I am guessing that only the UI thread is calling this animation function, so I am not sure that locking will help. Do you really want to block the entire thread, or just prevent another animation from starting?
I would suggest something more like this:
private bool isAnimating = false;
public void AnimateRectangle(Rectangle rect, double newX, double newY)
{
if (isAnimating)
return;
// rest of animation code
sb.Completed += (sender, e) =>
{
isAnimating = false;
};
isAnimating = true;
sb.Begin();
}
Just keep track of whether or not you are currently animating with a flag and return early if you are. If you don't want to lose potential animations your other option is to keep some kind of a queue for animation which you could check/start when each animation has completed.
This question really piqued my interest. In fact, I'm going to include it in my next blog post.
Boiling it down, just to be sure we are talking about the same thing: fundamentally you don't want to block the call to AnimateRectangle, you just want to "queue" the call so that once any outstanding call has completed its animation, this "queued" call gets executed. By extension you may need to queue several calls if a previous call hasn't even started yet.
So we need two things:-
A means to treat what are essentially asynchronous operations (sb.Begin to Completed event) as a sequential operation, one operation only starting when the previous has completed.
A means to queue additional operations when one or more operations are yet to complete.
AsyncOperationService
Item 1 comes up in a zillion different ways in Silverlight due to the asynchronous nature of so many things. I solve this issue with a simple asynchronous operation runner blogged here. Add the AsyncOperationService code to your project.
AsyncOperationQueue
It's item 2 that really took my interest. The variation here is that, whilst an existing set of operations is in progress, there is demand to add another. For a general-case solution we need a thread-safe means of including another operation.
Here is the bare-bones of an AsyncOperationQueue:-
public class AsyncOperationQueue
{
readonly Queue<AsyncOperation> myQueue = new Queue<AsyncOperation>();
AsyncOperation myCurrentOp = null;
public void Enqueue(AsyncOperation op)
{
bool start = false;
lock (myQueue)
{
if (myCurrentOp != null)
{
myQueue.Enqueue(op);
}
else
{
myCurrentOp = op;
start = true;
}
}
if (start)
DequeueOps().Run(delegate { });
}
private AsyncOperation GetNextOperation()
{
lock (myQueue)
{
myCurrentOp = (myQueue.Count > 0) ? myQueue.Dequeue() : null;
return myCurrentOp;
}
}
private IEnumerable<AsyncOperation> DequeueOps()
{
AsyncOperation nextOp = myCurrentOp;
while (nextOp != null)
{
yield return nextOp;
nextOp = GetNextOperation();
}
}
}
Putting it to use
The first thing to do is convert your existing AnimateRectangle method into a GetAnimateRectangleOp that returns an AsyncOperation. Like this:-
public AsyncOperation GetAnimateRectangleOp(Rectangle rect, double newX, double newY)
{
return (completed) =>
{
// Code identical to the body of your original AnimateRectangle method.
sb.Begin();
sb.Completed += (s, args) => completed(null);
};
}
We need to hold an instance of the AsyncOperationQueue:-
private AsyncOperationQueue myAnimationQueue = new AsyncOperationQueue();
Finally, we need to re-create AnimateRectangle so that it enqueues the operation onto the queue:-
public void AnimateRectangle(Rectangle rect, double newX, double newY)
{
myAnimationQueue.Enqueue(GetAnimateRectangleOp(rect, newX, newY));
}