How to include a sleep function in a Yew app? (Rust)

I am trying to build a small application with Yew (Rust/WASM). I would like to add a sleep to the Yew app. When I use std::thread::sleep, I get the error below.
I am using sleep like this:
let mut index = 0;
sleep(Duration::new(1, 0));
if col < 3 {
    index = row * 4 + (col + 1);
    if self.cellule[index].value == 1 {
        sleep(Duration::new(1, 0));
        // ...
    }
}
wasm.js:314 panicked at 'can't sleep', src/libstd/sys/wasm/thread.rs:26:9
Stack:
Error
at imports.wbg.__wbg_new_59cb74e423758ede (http://127.0.0.1:8080/wasm.js:302:19)
at console_error_panic_hook::hook::hd38f373f442d725c (http://127.0.0.1:8080/wasm_bg.wasm:wasm-function[117]:0x16a3e)
at core::ops::function::Fn::call::hf1476807b3d9587d (http://127.0.0.1:8080/wasm_bg.wasm:wasm-function[429]:0x22955)
at std::panicking::rust_panic_with_hook::hb07b303a83b6d242 (http://127.0.0.1:8080/wasm_bg.wasm:wasm-function[211]:0x1ed0d)
at std::panicking::begin_panic::h97f15f2442acdda4 (http://127.0.0.1:8080/wasm_bg.wasm:wasm-function[321]:0x21ee0)
at std::sys::wasm::thread::Thread::sleep::hdd97a2b229644713 (http://127.0.0.1:8080/wasm_bg.wasm:wasm-function[406]:0x22829)

Methods like thread::sleep don't work because in the JS environment you have a single thread only. If you call that sleep, you block the app completely.
If you want a delay or an interval, you should "order" a callback instead. You can check the following example of how to use TimeoutService or IntervalService for that: yew/examples/timer
The core idea is to create a timeout task this way:
let handle = TimeoutService::spawn(
    Duration::from_secs(3),
    self.link.callback(|_| Msg::Done),
);
// Keep the task handle, or the timer will be cancelled when it is dropped
self.timeout_job = Some(handle);
Now you can handle Msg::Done in your update method to react when the timer elapses.
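For context, here is a minimal sketch of how the component side could look. The Msg variants, the timeout_job field, and the overall component layout are assumptions made for illustration on top of the Yew 0.18-style ComponentLink/TimeoutService API shown above, not code from the question:
use std::time::Duration;
use yew::prelude::*;
use yew::services::timeout::{TimeoutService, TimeoutTask};

enum Msg {
    Start,
    Done,
}

struct Model {
    link: ComponentLink<Self>,
    // Holding the task keeps the timer alive; dropping it cancels the timeout
    timeout_job: Option<TimeoutTask>,
    finished: bool,
}

impl Component for Model {
    type Message = Msg;
    type Properties = ();

    fn create(_props: Self::Properties, link: ComponentLink<Self>) -> Self {
        Self { link, timeout_job: None, finished: false }
    }

    fn update(&mut self, msg: Self::Message) -> ShouldRender {
        match msg {
            Msg::Start => {
                // Schedule Msg::Done after 3 seconds instead of blocking with sleep
                let handle = TimeoutService::spawn(
                    Duration::from_secs(3),
                    self.link.callback(|_| Msg::Done),
                );
                self.timeout_job = Some(handle);
                false
            }
            Msg::Done => {
                // The delay has elapsed; continue the grid logic here
                self.timeout_job = None;
                self.finished = true;
                true
            }
        }
    }

    fn change(&mut self, _props: Self::Properties) -> ShouldRender {
        false
    }

    fn view(&self) -> Html {
        html! {
            <button onclick=self.link.callback(|_| Msg::Start)>
                { if self.finished { "Done!" } else { "Start 3s timer" } }
            </button>
        }
    }
}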
Threads are actually available, but it's a complex topic and you have to use the Web Workers API to reach them. Anyway, it's useless for your case. There are also some proposals in the standards, but they aren't available in browsers yet.
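If you prefer async/await over services, the same non-blocking delay can be written as a future you await. This is only a sketch of an alternative, assuming the gloo-timers (with its "futures" feature) and wasm-bindgen-futures crates, which are not part of the answer above:
// Inside update(), instead of spawning a TimeoutService task:
let callback = self.link.callback(|_: ()| Msg::Done);
wasm_bindgen_futures::spawn_local(async move {
    // Await a 1-second timer instead of blocking the thread
    gloo_timers::future::TimeoutFuture::new(1_000).await;
    callback.emit(());
});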

Related

How to break a JS script in case of too long execution time?

I have a Node.js script which uses an external library. The problem is that one of its functions freezes (probably some infinite loop inside) for specific arguments. Example code:
const Elements = [/*...*/];
for (let i = 0; i < Elements.length; i++) {
    console.log(`Running ${i}/${Elements.length} ...`);
    someExternalFunction(Elements[i]);
}
My console:
Running 1/352 ...
Running 2/352 ...
Running 3/352 ...
Running 4/352 ...
Running 5/352 ...
Running 6/352 ...
// the script freezes here
Is there any way to do something like:
if someExternalFunction takes longer than 10 seconds, break it and continue the loop
If the function cannot work properly for some arguments I can handle it. But I don't want one damaged element to freeze the entire loop.
If there is no way to solve it like this, maybe there is another approach to this problem?
Thanks

SwitchToContext does not return to the original thread

I'm working with an API that can only access its objects on the main thread, so I need to create a new thread to be used for my GUI and then swap back to the original thread for any lengthy calculations involving the API.
So far I have the following code:
[<EntryPoint; STAThread>]
let main _ =
    Debug.WriteLine($"[{Thread.CurrentThread.ManagedThreadId}] - Initial thread")
    let initCtx = SynchronizationContext.Current
    let uiThread = new Thread(fun () ->
        let guiCtx = SynchronizationContext.Current
        Debug.WriteLine($"[{Thread.CurrentThread.ManagedThreadId}] - New UI thread")
        async {
            do! Async.SwitchToContext initCtx
            Debug.WriteLine($"[{Thread.CurrentThread.ManagedThreadId}] - Back to initial thread")
            // Lengthy API calculation here
            do! Async.SwitchToContext guiCtx
            Debug.WriteLine($"[{Thread.CurrentThread.ManagedThreadId}] - Back to UI thread")
        } |> Async.RunSynchronously
    )
    uiThread.SetApartmentState(ApartmentState.STA)
    uiThread.Start()
    1
However when I run this I get the output:
[1] - Initial thread
[4] - New UI thread
[5] - Back to initial thread
[5] - Back to UI thread
So it doesn't seem to be switching contexts the way I would expect. How can I switch back to the original thread after creating a new thread this way?
I have tried calling SynchronizationContext.SetSynchronizationContext(new DispatcherSynchronizationContext(Dispatcher.CurrentDispatcher)) first to ensure that the original thread has a valid SynchronizationContext, but that causes the program to exit at the Async.SwitchToContext lines without throwing any exception.
I have also tried using Async.StartImmediate instead of RunSynchronously with the same result.
If I try both of these at the same time then the program just freezes up at the Async.SwitchToContext lines instead of exiting out.

How many threads does the uno-platform support for WASM (webassembly) projects?

I am trying to determine if the uno-platform will work for a specific project that I want to deploy as a WebAssembly app.
I've played around with Uno and have configured the newly available threading support using:
<MonoWasmRuntimeConfiguration>threads-release</MonoWasmRuntimeConfiguration>
As I understand it, threading support is only available in Chrome and Edge at this time...but that is ok for my needs.
I've created a simple button that should spin up 10 worker threads like so:
private void Button_Click(object sender, RoutedEventArgs e)
{
    for (int i = 0; i < 10; i++)
    {
        var t = new Thread(() => DoWork());
        t.Start();
    }
}

private void DoWork()
{
    for (int n = 0; n < 10000; n++)
    {
        Console.WriteLine($"Task {Thread.CurrentThread.ManagedThreadId} running {n}");
    }
}
This code is the only thing added to the default "hello world" WASM project from the uno-platform templates.
When looking at the output, I can see that 2 threads are working as expected and interleaving results. (Prior to adding <MonoWasmRuntimeConfiguration>threads-release</MonoWasmRuntimeConfiguration>, the output would be synchronous for a single "thread" and then synchronous for the next "thread".)
Task 3 running 9987
Task 2 running 9843
Task 3 running 9988
At the completion of both threads, the following warning is logged in the browser console.
PThread 70424616 is attempting to join to itself!
__emscripten_do_pthread_join # dotnet.js:1
_pthread_join # dotnet.js:1
mono_native_thread_join # 00806cba:0x70078
threads_native_thread_join_lock # 00806cba:0xbfd98
mono_threads_join_threads # 00806cba:0x6aefe
mono_runtime_do_background_work # 00806cba:0xdb520
mono_background_exec # 00806cba:0xf89c5
Module._mono_background_exec # dotnet.js:1
pump_message # dotnet.js:1
setTimeout (async)
_schedule_background_exec # dotnet.js:1
mono_threads_schedule_background_job # 00806cba:0x1612c
mono_threads_add_joinable_runtime_thread # 00806cba:0xd8aa4
sgen_client_thread_detach_with_lock # 00806cba:0xc67bd
thread_detach_with_lock # 00806cba:0xc0142
unregister_thread # 00806cba:0x5e469
mono_thread_info_detach # 00806cba:0xd98c6
mono_thread_info_exit # 00806cba:0x49724
start_wrapper # 00806cba:0xc1206
dynCall_ii # 00806cba:0x11d3d7
Module.dynCall_ii # dotnet.js:1
onmessage # dotnet.worker.js:1
Two questions arise which I can't find documentation about.
Why only 2 threads? (on both Chrome and Edge) Is this a browser limitation? A setting in Uno-platform?
Why do the remaining 8 threads not get started? They are essentially lost and none of the work gets performed. Is this a bug in the uno-platform? Mono? emscripten possibly?
I know threading is still experimental, but I am curious if there is a known limitation that I am hitting?
The number two may come from the default provided by Emscripten, which pre-allocates a default number of worker threads before the application's main runs. You can change the default number of threads with this:
<ItemGroup>
    <WasmShellExtraEmccFlags Include="-s PTHREAD_POOL_SIZE=10" />
</ItemGroup>
The support for threads is likely to improve a lot during the .NET 6 timeframe, so this may change significantly.

Openresty concurrent requests

I would like to use OpenResty with the Lua interpreter.
I can't make the OpenResty framework handle two concurrent requests to two separate endpoints. I simulate one request doing some heavy calculation by running a long loop:
local function busyWaiting()
    local self = coroutine.running()
    local i = 1
    while i < 9999999 do
        i = i + 1
        coroutine.yield(self)
    end
end

local self = coroutine.running()
local thread = ngx.thread.spawn(busyWaiting)

while (coroutine.status(thread) ~= 'zombie') do
    coroutine.yield(self)
end

ngx.say('test1!')
The other endpoint just sends response immediately.
ngx.say('test2')
I send a request to the first endpoint and then I send a second request to the second endpoint. However, the OpenResty is blocked by the first request and so I receive both responses almost at the same time.
Setting nginx parameter worker_processes 1; to higher number does not help either and I would like to have only single worker process anyway.
What is the proper way to let OpenResty handle additional requests and not to get blocked by the first request?
local function busyWaiting()
    local self = ngx.coroutine.running()
    local i = 1
    while i < 9999999 do
        i = i + 1
        ngx.coroutine.yield(self)
    end
end

local thread = ngx.thread.spawn(busyWaiting)

while (ngx.coroutine.status(thread) ~= 'dead') do
    ngx.coroutine.resume(thread)
end

ngx.say('test1!')

Perl Reading from Thread::Queue with timeout

I am working in a boss worker crew multithreaded scenario with Thread::Queue in Perl.
The boss enqueues tasks and the workers dequeue from the queue.
I need the worker crew to send downstream ping messages when the boss does not send a task via the queue for x seconds.
Unfortunately there seems to be no dequeue method with a timeout.
Have I missed something or would you recommend a different approach/different data structure?
You can add the functionality yourself, knowing that a Thread::Queue object is a blessed reference to a shared array (which I believe is the implementation from 5.8 through 5.16):
package Thread::Queue::TimedDequeue;
use parent 'Thread::Queue';
use threads::shared;  # cond_timedwait is exported by default; lock() is a Perl built-in

sub timed_dequeue {
    my ($q, $patience) = @_; # XXX revert to $q->dequeue() if $patience is negative?
                             #     $q->dequeue_nb() if $patience is zero?
    my $timelimit = time() + $patience;
    lock(@$q);
    until (@$q) {
        # Stop waiting once cond_timedwait reports that the time limit has passed
        last if !cond_timedwait(@$q, $timelimit);
    }
    return shift(@$q) if @$q; # We got an element
    # else we timed out.
}

1;
Then you'd do something like:
# main.pl
use threads;
use strict; use warnings;
use Thread::Queue::TimedDequeue;

use constant WORKER_PATIENCE => 10; # 10 seconds

my $queue = Thread::Queue::TimedDequeue->new();
...

sub worker {
    my $item = $queue->timed_dequeue(WORKER_PATIENCE);
    timedout() unless $item;
    ...
}
Note that the above approach assumes you do not enqueue undef or an otherwise false value.
There is nothing wrong with your approach/structure; you just need to add some timeout control over your Thread::Queue. That is, either:
create a "yield"-based loop that checks the queue from the child side, using a time reference to detect the timeout, or
use the Thread::Queue::Duplex or Thread::Queue::Multiplex modules, which might be a bit overkill but do implement timeout controls.
