Swift Combine - delaying a publisher - delay

TL;DR
I want to delay a publication, but can't figure out how to, er, combine the parts
In Brief
I have a Publisher
let generator = PassthroughSubject<Bool, Never>()
and want somehow to use the modifier
.delay(for: 2, scheduler: RunLoop.main)
so that when I call
generator.send(true)
the message is sent two seconds after the call to send()
Looking at the docs for Publishers.Delay made the type error clearer, but didn't help me find the right way to hook things up.
Code
import SwiftUI
import Combine

// Exists just to subscribe.
struct ContainedView: View {
    private let publisher: AnyPublisher<Bool, Never>

    init(_ publisher: AnyPublisher<Bool, Never> = Just(false).dropFirst().eraseToAnyPublisher()) {
        self.publisher = publisher
    }

    var body: some View {
        Rectangle().onReceive(publisher) { _ in print("Got it") }
    }
}

struct ContentView: View {
    let generator = PassthroughSubject<Bool, Never>()
    // .delay(for: 2, scheduler: RunLoop.main)
    // Putting it here doesn't work either.

    var body: some View {
        VStack {
            Button("Tap") {
                // Does not compile
                self.generator.delay(for: 2, scheduler: RunLoop.main).send(true)
                // Value of type 'Publishers.Delay<PassthroughSubject<Bool, Never>, RunLoop>' has no member 'send'
                // https://developer.apple.com/documentation/combine/publishers/delay

                // Does not compile
                self.generator.send(true).delay(for: 2, scheduler: RunLoop.main)
                // Value of tuple type '()' has no member 'delay'

                // Just a broken-up version of the first try.
                let delayed = self.generator.delay(for: 2, scheduler: RunLoop.main)
                delayed.send(true)

                // This, of course, builds and works.
                self.generator.send(true)
                print("Sent it")
            }

            ContainedView(generator.eraseToAnyPublisher())
                .frame(width: 300, height: 200)
        }
    }
}

You can use the debounce operator on a publisher to delay publishing (it waits until values stop arriving for the given interval, then publishes the latest one):
$yourProperty
    .debounce(for: 0.8, scheduler: RunLoop.main)
    .eraseToAnyPublisher()

.delay(for: 2, scheduler: RunLoop.main) is likely exactly what you need, but it's key to see how you're subscribing to fully understand the issue. Delay doesn't postpone the call to send() on a subject - send() is the imperative entry point and pushes the value the moment it's invoked, typically into some already existing subscription. The delay belongs downstream, between the subject and its subscriber.
While you have a subscriber in the first bit of code, there isn't one attached to the subject through a delayed chain to pin these together.
For example, if you updated:
Just(false).dropFirst().eraseToAnyPublisher()
to
Just(false).dropFirst().delay(for: 2, scheduler: RunLoop.main).eraseToAnyPublisher()
then the print statement should trigger about 2 seconds after init() was invoked. Depending on what you're trying to accomplish here, using a closure trigger such as onAppear might make a lot more sense: have it call the subject's send(), which you can then delay as you like in the publisher chain that sits before whatever subscribes to it.
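Applied to the code in the question, a minimal sketch along those lines (keeping ContainedView unchanged): the delay sits in the chain handed to the subscriber, so send(true) still fires immediately and delivery arrives about two seconds later.
struct ContentView: View {
    let generator = PassthroughSubject<Bool, Never>()

    var body: some View {
        VStack {
            Button("Tap") {
                self.generator.send(true)   // fires immediately
                print("Sent it")
            }
            // The onReceive in ContainedView prints "Got it" ~2 seconds later.
            ContainedView(
                generator
                    .delay(for: 2, scheduler: RunLoop.main)
                    .eraseToAnyPublisher()
            )
            .frame(width: 300, height: 200)
        }
    }
}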

import Combine
import Foundation

var cancellables: [AnyCancellable] = []
let generator = PassthroughSubject<Bool, Never>()

// Subscribe through the delayed chain and keep the subscription alive.
generator
    .delay(for: 2, scheduler: RunLoop.main)
    .sink { value in
        print(value.description + " " + Date().timeIntervalSinceReferenceDate.description)
    }
    .store(in: &cancellables)

print(Date().timeIntervalSinceReferenceDate.description)
generator.send(true)
generator.send(false)
Output:
641453284.840604
true 641453286.841731
false 641453286.847715

Related

Using wasm_timer in Yew to execute callback repeatedly

I'm still rather new to Rust and have a hard time wrapping my head around futures. I want to implement a "timer app" in the browser, and to do so I'm using https://yew.rs/. For the timer I tried to use https://github.com/tomaka/wasm-timer/, but there are no docs and no examples. It looks like the usage is supposed to be obvious, but I don't get it.
I assume that I have to do something like:
let i = Interval::new(core::time::Duration::from_millis(250));
This should create an Interval that fires every 250ms. But what is fired? How do I specify my callback? I would expect something like:
i.somehow_specify_callback(|| { /* executed every 250ms */ });
My feeling is that I'm somehow on the wrong path and don't really grasp Rust futures. A working example of how to make an Interval execute some code would be much appreciated.
Here is a pseudo-code example for a Timer component. The Interval used below follows gloo's callback-style API, which takes the period in milliseconds plus a closure (an assumption on my side, since the question mentioned wasm-timer):
// Assumed imports for this sketch:
use std::rc::Rc;
use gloo::timers::callback::Interval;
use yew::prelude::*;

enum SecondsStateAction {
    Increment,
}

#[derive(Default)]
struct SecondsState {
    seconds: usize,
}

impl Reducible for SecondsState {
    /// Reducer Action Type
    type Action = SecondsStateAction;

    /// Reducer Function
    fn reduce(self: Rc<Self>, action: Self::Action) -> Rc<Self> {
        match action {
            SecondsStateAction::Increment => Self { seconds: self.seconds + 1 }.into(),
        }
    }
}

#[function_component(Timer)]
pub fn timer() -> Html {
    let seconds_state_handle = use_reducer(SecondsState::default);

    use_effect_with_deps(
        {
            let seconds_state_handle = seconds_state_handle.clone();
            move |_| {
                // If intervals go out of scope they get dropped and destroyed,
                // so we move it into the cleanup closure; Rust will consider it
                // still in use and won't drop it until we drop it ourselves there.
                let interval = Interval::new(1000, move || {
                    seconds_state_handle.dispatch(SecondsStateAction::Increment)
                });
                move || drop(interval)
            }
        },
        (), // Only create the interval once for the component's lifetime.
    );

    html! { <h1>{ seconds_state_handle.seconds }{ " seconds have passed since this component got rendered" }</h1> }
}
To learn more about the hooks used in this code, visit https://yew.rs/docs/concepts/function-components/pre-defined-hooks
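For completeness, a hedged sketch of mounting the component, assuming a yew 0.19-style entry point (the start-up API differs between yew versions):
#[function_component(App)]
fn app() -> Html {
    html! { <Timer /> }
}

fn main() {
    // yew 0.19: start_app; newer versions use yew::Renderer instead.
    yew::start_app::<App>();
}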

how to access event parameters in unit test frame 0.9.12

Hi all, I am trying to access the values passed in the event.
I have extracted the event with this - note this is Substrate 0.9.12, so I have not been able to use some of the examples I found online that use Substrate 2.0.0:
let e = &frame_system::Pallet::<Test>::events()[0];
let EventRecord { event, .. } = e;
And this is the structure of the event:
Event::MosaicVault(
    Event::VaultCreated {
        sender: 1,
        asset_id: A,
        vault_id: 1,
        reserved: Perquintill(
            1000000000000000000,
        ),
    },
)
How do I access the vault_id value? A sample would be helpful, thanks.
This is called destructuring an enum (if you want to search online for more examples).
Basically you can use a match or an if let to get at the inner fields of particular enum variants.
Something a bit like this:
if let Event::MosaicVault(Event::VaultCreated { vault_id, .. }) = event {
    // here you have access to vault_id
}
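The same thing with match, in case you prefer handling the other variants explicitly (a sketch reusing the nested Event names from the question):
match event {
    Event::MosaicVault(Event::VaultCreated { vault_id, .. }) => {
        // vault_id is available here
    }
    _ => {} // every other event is ignored
}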

Chrome Extension | Multiple alarms going off at once

I am creating a task reminder extension. The user has the option to keep adding tasks and set a reminder for each task.
I am using chrome.storage to store these tasks and an onChanged listener on the storage to create an alarm for each task added to the storage.
But the issue is that if I set a reminder of 2 minutes for one task and 3 minutes for another, then at the end of 2 minutes I get notifications for both tasks, and at the end of 3 minutes I again get notifications for both tasks.
background.js
chrome.storage.onChanged.addListener(function(changes, namespace) {
    let id = (changes.tasks.newValue.length) - 1
    let data = changes.tasks.newValue[id]
    if (data.task && data.hrs && data.min) {
        let totalMins = (parseInt(data.hrs * 60)) + parseInt(data.min)
        let alarmTime = 60 * 1000 * totalMins
        chrome.alarms.create("remind" + id, { when: Date.now() + alarmTime })
    }
    chrome.alarms.onAlarm.addListener(() => {
        let notifObj = {
            type: "basic",
            iconUrl: "./images/logo5.png",
            title: "Time to complete you task",
            message: data.task
        }
        chrome.notifications.create('remindNotif' + id, notifObj)
    })
})
popup.js
let hrs = document.querySelector("#time-hrs")
let min = document.querySelector("#time-min")
let submitBtn = document.querySelector("#submitBtn")
let task = document.querySelector("#task")

hrs.value = 0;
min.value = 1;

hrs.addEventListener('change', () => {
    if (hrs.value < 0) {
        hrs.value = 0;
    }
})

min.addEventListener('change', () => {
    if (min.value < 1) {
        min.value = 1;
    }
})

submitBtn.addEventListener("click", () => {
    if (task.value) {
        chrome.storage.sync.get('tasks', (item) => {
            let taskArr = item.tasks ? item.tasks : []
            taskArr.push({ task: task.value, hrs: hrs.value, min: min.value })
            chrome.storage.sync.set({ 'tasks': taskArr })
        })
    }
});
manifest.json
{
  "name": "Link Snooze",
  "description": "This extension reminds you to open your saved links",
  "manifest_version": 2,
  "version": "0.1.0",
  "icons": {
    "16": "./images/logo5.png",
    "48": "./images/logo5.png",
    "128": "./images/logo5.png"
  },
  "browser_action": {
    "default_popup": "popup.html",
    "default_icon": "./images/logo5.png"
  },
  "permissions": ["storage", "notifications", "alarms"],
  "background": {
    "scripts": ["background.js"],
    "persistent": false
  },
  "options_page": "options.html"
}
Problem.
Each time the storage changes you register a new onAlarm listener in addition to the old ones, and all of them run whenever any single alarm fires.
Solution.
When using a non-persistent background script, all API listeners must be registered just once for the same function, and it must be done synchronously - not inside an asynchronous callback, await, or then() - otherwise the event will be lost when the background script auto-terminates and later wakes up for that event. The convention is to register them at the beginning of the script. The reason it worked for you until now is that the background script is kept alive while the popup is open or while devtools for the background script is open.
Such listeners evidently won't be able to use variables from an asynchronous callback directly, like data.task in your code. The solution is to attach the data to the event itself, for example by creating the alarm with a name that already contains the data, specifically data.task:
chrome.alarms.create(data.task, {delayInMinutes: hrs * 60 + min});
The onAlarm event provides the alarm as a parameter, so you can read its name there; see the documentation.
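Put together, a minimal sketch of background.js along these lines (assuming the same {task, hrs, min} objects the popup stores; the callbacks are illustrative, not a drop-in rewrite):
// Registered once, synchronously, at the top of the script.
chrome.alarms.onAlarm.addListener((alarm) => {
    chrome.notifications.create('remindNotif-' + alarm.name, {
        type: 'basic',
        iconUrl: './images/logo5.png',
        title: 'Time to complete your task',
        message: alarm.name  // the task text doubles as the alarm name
    });
});

chrome.storage.onChanged.addListener((changes) => {
    if (!changes.tasks) return;
    const data = changes.tasks.newValue[changes.tasks.newValue.length - 1];
    if (data && data.task && data.hrs != null && data.min != null) {
        chrome.alarms.create(data.task, {
            delayInMinutes: Number(data.hrs) * 60 + Number(data.min)
        });
    }
});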
Random hints:
An object can be used as an alarm name if you call JSON.stringify(obj) when creating the alarm and JSON.parse(alarm.name) in onAlarm.
In the popup, instead of manually adjusting out-of-range values, use a number input in the HTML:
<input id="time-min" type="number" min="0" max="59" step="1">
Then read it as a number: document.querySelector("#time-min").valueAsNumber || 0, as in the sketch below.
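A small sketch of the popup side with that approach, assuming the same element IDs as in the question:
// valueAsNumber is NaN for an empty field, hence the fallback to 0.
const hrs = document.querySelector("#time-hrs").valueAsNumber || 0;
const min = document.querySelector("#time-min").valueAsNumber || 0;
const delayInMinutes = hrs * 60 + min; // already numeric, no parseInt needed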

getting payload from a substrate event back in rust tests

I've created my first Substrate project successfully and the built pallet also works fine. Now I want to create tests for the flow and the provided functions.
My flow is to generate a random hash and store this hash associated with the sender of the transaction:
let _sender = ensure_signed(origin)?;
let nonce = Nonce::get();
let _random_seed = <randomness_collective_flip::Module<T>>::random_seed();
let random_hash = (_random_seed, &_sender, nonce).using_encoded(T::Hashing::hash);

ensure!(!<Hashes<T>>::contains_key(random_hash), "This new id already exists");

let _now = <timestamp::Module<T>>::get();
let new_elem = HashElement {
    id: random_hash,
    parent: parent,
    updated: _now,
    created: _now
};

<Hashes<T>>::insert(random_hash, new_elem);
<HashOwner<T>>::insert(random_hash, &_sender);

Self::deposit_event(RawEvent::Created(random_hash, _sender));
Ok(())
This works well so far. Now I want to test the flow with a written test: I want to check that the hash emitted in the Created event is also assigned in the HashOwner map. For this I need to get the value back out of the event.
And this is my problem :D I'm not a Rust professional, and all the examples I found expect you to already know all the values emitted in the event, like this:
// construct the event that should be emitted in the method call directly above
let expected_event = TestEvent::generic_event(RawEvent::EmitInput(1, 32));

// iterate through the array of `EventRecord`s
assert!(System::events().iter().any(|a| a.event == expected_event));
When debugging my written test:
assert_ok!(TemplateModule::create_hash(Origin::signed(1), None));
let events = System::events();
let lastEvent = events.last().unwrap();
let newHash = &lastEvent.event;
I can see in VS Code that the values are available:
[screenshot: VS Code debug window showing the event values]
But I don't know how to get this hash back into a variable... maybe it's only a one-liner... but my Rust knowledge is far too small :D
Thank you for your help.
Here's a somewhat generic example of how to parse and check events, if you only care about the last event that your module put into system and nothing else.
assert_eq!(
    System::events()
        // this gives you an EventRecord { event: ..., ... }
        .into_iter()
        // map into the inner `event`.
        .map(|r| r.event)
        // the inner event is like `OuterEvent::moduleEvent(EventEnum)`. The name of the outer
        // event comes from whatever you have placed in your `decl_event! {}` in the test mocks.
        .filter_map(|e| {
            if let MetaEvent::templateModule(inner) = e {
                Some(inner)
            } else {
                None
            }
        })
        .last()
        .unwrap(),
    // RawEvent is defined and imported in the template.rs file.
    // val1 and val2 are the things that you want to assert against.
    RawEvent::Created(val1, val2),
);
Indeed you can also omit the first map (folding it into the filter_map) or do it in more compact ways, but I have done it like this so you can see it step by step.
Printing System::events() also helps, as shown below.
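For example, right after the call under test you can dump every record in one go (in a typical test runtime the event type derives Debug):
println!("{:#?}", System::events());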
I got it now, thanks to the response from kianenigma :)
I wanted to reuse the data carried in the event:
let lastEvent = System::events()
    // this gives you an EventRecord { event: ..., ... }
    .into_iter()
    // map into the inner `event`.
    .map(|r| r.event)
    // the inner event is like `OuterEvent::moduleEvent(EventEnum)`. The name of the outer
    // event comes from whatever you have placed in your `decl_event! {}` in the test mocks.
    .filter_map(|e| {
        if let TestEvent::pid(inner) = e {
            Some(inner)
        } else {
            None
        }
    })
    .last()
    .unwrap();

if let RawEvent::Created(newHash, initiatedAccount) = lastEvent {
    // there are the values :D
}
This can maybe be written better, but it helps me :)

How to safely select across channels where some may get concurrently closed?

While answering a question I attempted to implement a setup where the main thread joins the efforts of the CommonPool to execute a number of independent tasks in parallel (this is how java.util.streams operates).
I create as many actors as there are CommonPool threads, plus a channel for the main thread. The actors use rendezvous channels:
val resultChannel = Channel<Double>(UNLIMITED)

val poolComputeChannels = (1..commonPool().parallelism).map {
    actor<Task>(CommonPool) {
        for (task in channel) {
            task.execute().also { resultChannel.send(it) }
        }
    }
}
val mainComputeChannel = Channel<Task>()
val allComputeChannels = poolComputeChannels + mainComputeChannel
This allows me to distribute the load by using a select expression to find an idle actor for each task:
select {
    allComputeChannels.forEach { chan ->
        chan.onSend(task) {}
    }
}
So I send all the tasks and close the channels:
launch(CommonPool) {
    jobs.forEach { task ->
        select {
            allComputeChannels.forEach { chan ->
                chan.onSend(task) {}
            }
        }
    }
    allComputeChannels.forEach { it.close() }
}
Now I have to write the code for the main thread. Here I decided to serve both the mainComputeChannel, executing the tasks submitted to the main thread, and the resultChannel, accumulating the individual results into the final sum:
return runBlocking {
    var completedCount = 0
    var sum = 0.0
    while (completedCount < NUM_TASKS) {
        select<Unit> {
            mainComputeChannel.onReceive { task ->
                task.execute().also { resultChannel.send(it) }
            }
            resultChannel.onReceive { result ->
                sum += result
                completedCount++
            }
        }
    }
    resultChannel.close()
    sum
}
This gives rise to the situation where mainComputeChannel may be closed from a CommonPool thread, but the resultChannel still needs serving. If the channel is closed, onReceive will throw an exception and onReceiveOrNull will immediately select with null. Neither option is acceptable. I didn't find a way to avoid registering the mainComputeChannel if it's closed, either. If I use if (!mainComputeChannel.isClosedForReceive), it will not be atomic with the registration call.
This leads me to my question: what would be a good idiom to select over channels where some may get closed by another thread while others are still live?
The kotlinx.coroutines library is currently missing a primitive to make this convenient. The outstanding proposal is to add a receiveOrClosed function and an onReceiveOrClosed clause for select that would make writing code like this possible.
However, you will still have to manually track the fact that your mainComputeChannel was closed and stop selecting on it once it is. So, using the proposed onReceiveOrClosed clause, you'd write something like this:
// outside of the loop
var mainComputeChannelClosed = false

// inside the loop
select<Unit> {
    if (!mainComputeChannelClosed) {
        mainComputeChannel.onReceiveOrClosed {
            if (it.isClosed) mainComputeChannelClosed = true
            else { /* do something with it */ }
        }
    }
    // more clauses
}
See https://github.com/Kotlin/kotlinx.coroutines/issues/330 for details.
There are no proposals on the table to further simplify this kind of pattern.
