Can I build an Event based on a Behavior change in reactive-banana?

The goal here is to build an Event that triggers whenever a Behavior changes under some observation.
To make it a bit more concrete, let's say I have:
a bFoo :: Behavior a, which I like to think of as a time-varying state.
a function diffA :: a -> a -> Maybe b that tells whether a certain part of the value of bFoo has changed (together with a bit more info indicating what has changed, leaving me some room to avoid boolean blindness)
and now I want to build an Event eChanged :: Event b that triggers if and only if we get a Just _ out of diffA fooBefore fooAfter.
Here comes the problem: it seems the framework only lets you look at one Behavior value in the current Moment, so I don't have access to two values to compare against each other. I've been looking at Reactive.Banana.Combinators but still don't have much of an idea whether this is possible.
A few more bits and pieces, in case they are helpful or spark any new ideas:
Event-based rather than Behavior-based?
The first thing I can think of is simply to allow myself to have access to more details:
bFoo is accumB-ed from an Event eUpdate, but eUpdate doesn't necessarily update anything. So yes, we do have access to eFoo :: Event a if we want to, but I want to avoid using it, as I consider it an implementation detail of bFoo.
A Failed Attempt
My current attempt looks like:
networkDesc :: MomentIO ()
networkDesc = do
  ...
  foo <- valueB bFoo
  let eChanged = filterJust $ fmap (diffA foo) eUpdate
  ...
This doesn't work: foo is sampled only once, so it does not change over time.
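The closest variant I can come up with that does work pairs each update with the value it replaces and diffs the pair, but it leans on eUpdate, the very implementation detail I said I'd rather not touch. A minimal sketch (the helper name changesVia and the initial value foo0 are made up; it assumes eUpdate :: Event (a -> a), matching the accumB above):

import Reactive.Banana

-- Sketch only: accumulate (previous, current) pairs from the same updates
-- that build bFoo, then keep the occurrences where diffA reports a change.
changesVia :: MonadMoment m
           => (a -> a -> Maybe b)  -- diffA
           -> a                    -- foo0, the initial value of bFoo
           -> Event (a -> a)       -- eUpdate, the updates bFoo is accumB-ed from
           -> m (Event b)
changesVia diffA foo0 eUpdate = do
  ePair <- accumE (foo0, foo0) (fmap (\upd (_, cur) -> (cur, upd cur)) eUpdate)
  pure (filterJust (fmap (uncurry diffA) ePair))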
Last resort
My last resort is to reactimate on eFoo, do the comparison outside of the framework, and feed the result back into the network. That sounds doable, but it defeats the purpose of having the network take care of all the logic.


Is it a good or a bad thing that a suite of quickcheck tests match the implementations?

I'm trying to get started with Haskell's QuickCheck, and while I am familiar with the concepts behind the testing methodology, this is the first time I am trying to put it to use on a project that goes beyond testing stuff like reverse . reverse == id and that kind of thing. I want to know if it is useful to apply it to business logic (I think it very much could be).
So a couple of existing business logic type functions that I would like to test look like the following:
shouldDiscountProduct :: User -> Product -> Bool
shouldDiscountProduct user product =
  if M.isNothing (userDiscountCode user)
    then False
    else if M.isJust (productDiscount product)
      then True
      else False
For this function I can write a QuickCheck spec like the following:
data ShouldDiscountProductParams
  = ShouldDiscountProductParams User Product

instance Show ShouldDiscountProductParams where
  show (ShouldDiscountProductParams u p) =
    "ShouldDiscountProductParams:\n\n" <>
    "- " <> show u <> "\n\n" <>
    "- " <> show p

instance Arbitrary ShouldDiscountProductParams where
  arbitrary = ShouldDiscountProductParams <$> arbitrary <*> arbitrary
shouldDiscountProduct :: Spec
shouldDiscountProduct = it behavior (property verify)
  where
    behavior =
      "when product eligible for discount\n"
        <> " and user has discount code"
    verify (ShouldDiscountProductParams p t) =
      subject p t `shouldBe` expectation p t
    subject =
      SUT.shouldDiscountProduct
    expectation User{..} Product{..} =
      case (userDiscountCode, productDiscount) of
        (Just _, Just _) -> True
        _                -> False
And what I end up with is an expectation function that verifies the current implementation of shouldDiscountProduct, just more elegantly. So now that I have a test, I can refactor my original function. But my natural inclination would be to change it to the very implementation in expectation:
shouldDiscountProduct User{..} Product{..} =
  case (userDiscountCode, productDiscount) of
    (Just _, Just _) -> True
    _                -> False
But this is fine, right? If I want to change this function again in the future, I have the test ready to verify that my changes are appropriate and don't inadvertently break something.
Or is this overkill / double bookkeeping? OOP testing has ingrained in me that you should avoid mirroring implementation details as much as possible, and this couldn't be further from that advice: it literally is the implementation!
I suspect that as I go through my project, I will effectively keep adding these kinds of tests and then refactoring to the cleaner implementation I wrote in the expectation assertion. Obviously this won't hold for functions more complex than these, but in the round I think it will.
What are people's experiences with using property-based testing for business-logic functions? Are there any good resources out there for this kind of thing? I guess I just want to verify that I am using QC in an appropriate way, and it's just my OOP past throwing doubts into my mind about this...
I'm sorry to jump in a few months later, but as this question easily pops up on Google, I think it needs a better answer.
Ivan's answer is about unit tests while you are talking about property tests, so let's disregard it.
Dfeuer tells you when it's acceptable to mirror the implementation, but not what to do for your use case.
It's a common mistake with property-based tests (PBT) to start by rewriting the implementation code. But that is not what PBT are for. They exist to check properties of your function. Hey, don't worry, we all make this mistake the first few times we write PBT :D
A type of property you could check here is whether your function response is consistent with its input:
if SUT.shouldDiscountProduct p t
then isJust (userDiscountCode p) && isJust (productDiscount t)
else isNothing (userDiscountCode p) || isNothing (productDiscount t)
This one is subtle in your particular use case, but pay attention: we reversed the logic. Your test checks the input and, based on that, asserts on the output. My test checks the output and, based on that, asserts on the input. In other use cases this could be much less symmetric. Most of the code can also be refactored; I'll leave that to you as an exercise ;)
But you may find other types of properties! E.g. invariance properties:
SUT.shouldDiscountProduct p{userDiscountCode = Nothing} t == False
SUT.shouldDiscountProduct p{productDiscount = Nothing} t == False
See what we did here? We fixed one part of the input (e.g. the user discount code is always empty) and we assert that no matter how everything else varies, the output is invariant (always false). Same goes for product discount.
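As a sketch (reusing the ShouldDiscountProductParams generator from the question, and assuming userDiscountCode and productDiscount are ordinary record fields, as the snippets above already do; the prop_ names are made up), these invariance properties become plain QuickCheck properties you can run with quickCheck:

-- Sketch: one field is pinned to Nothing, everything else varies freely,
-- and the result must always be False.
prop_noUserCodeMeansNoDiscount :: ShouldDiscountProductParams -> Bool
prop_noUserCodeMeansNoDiscount (ShouldDiscountProductParams u prod) =
  SUT.shouldDiscountProduct u { userDiscountCode = Nothing } prod == False

prop_noProductDiscountMeansNoDiscount :: ShouldDiscountProductParams -> Bool
prop_noProductDiscountMeansNoDiscount (ShouldDiscountProductParams u prod) =
  SUT.shouldDiscountProduct u prod { productDiscount = Nothing } == False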
One last example: you could use an analogous property to check that your old code and your new code behave exactly the same:
shouldDiscountProduct user product =
  if M.isNothing (userDiscountCode user)
    then False
    else if M.isJust (productDiscount product)
      then True
      else False

shouldDiscountProduct' user product
  | Just _ <- userDiscountCode user
  , Just _ <- productDiscount product
  = True
  | otherwise = False

SUT.shouldDiscountProduct p t == SUT.shouldDiscountProduct' p t
Which reads as "No matter the input, the rewritten function must always return the same value as the old function". This is so cool when refactoring!
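Again as a sketch with the question's generator (the property name is made up), the equivalence check is just one more property; once it passes, the old definition can be deleted:

-- Sketch: the old and the rewritten implementation must agree on every generated input.
prop_refactoringAgrees :: ShouldDiscountProductParams -> Bool
prop_refactoringAgrees (ShouldDiscountProductParams u prod) =
  SUT.shouldDiscountProduct u prod == SUT.shouldDiscountProduct' u prod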
I hope this helps you grasp the idea behind property-based tests: stop worrying so much about the exact value returned by your function, and start wondering about the behaviors your function has.
Note that PBT are not the enemy of unit tests; they actually fit well together. You could use one or two unit tests if it makes you feel safer about actual values, then write property test(s) to assert that your function has certain behaviors, no matter the input.
Basically the only times it makes sense for property checking to compare two implementations of the same function are when:
Both functions are part of the API, and they are supposed to relate to each other in a certain way. For example, we generally want liftEq (==) = (==), so we should test that liftEq for the type we're defining satisfies this property.
One implementation is obviously correct, but inefficient, while another is efficient but not obviously correct. In this case, the test suite should define the obviously correct version and check the efficient version against it.
For typical "business logic", neither of these apply. There might, however, be some special cases where they do. For example, you could have two different functions you call under different circumstances that are supposed to agree under certain conditions.
No, it's not a good thing because you're effectively comparing the results of code with results of the same code.
To resolve this chicken-and-egg problem, tests are built on these principles:
Tests feed predefined inputs and check for predefined outputs. Nothing "random". All sources of randomness are considered additional inputs and mocked or otherwise forced to produce specific values.
Sometimes, a compromise is possible: you leave a random source alone and check the output not for exact value but just for "correctness" (e.g. that it has a specific format). But then you're not testing the logic that is responsible for the parts that you don't check (though you may not need to, see below).
The only way to test a function completely is to exhaustively try all possible inputs.
Since this is almost always impossible, only a few "representative" inputs are selected.
An assumption is then made that the code handles all other possible inputs the same way.
This is why the test coverage metric is important: it will tell you when the code has changed in such a way that this assumption no longer holds.
To select the optimal "representative" input, follow the function's interface.
If there are some ranges in input data that trigger different behavior, edge values are usually the most useful
Outputs are checked against the interface's promises
Sometimes the interface doesn't promise a specific value for given inputs, and the variations are considered implementation details. Then you test not for specific values but only for what the interface guarantees.
Testing implementation details is only useful if other components rely on them -- then they are not really implementation details but parts of a separate, private interface.

What is a Sample in Helm?

There doesn't seem to be much documentation for Sample a in the Haskell FRP library Helm. I am trying to write a function similar to Elm's sampleOn, and I think update could help. However, I am confused about how update works because, from the source code here, it seems that the variable p is not used at all.
What should this function be doing, and why is the input p included if it isn't used? Is there a better way to do this? I think seq could work, but I tried implementing my animation with seq and it doesn't do what I am looking for.
Probably the first argument exists for historical reasons or for consistency with other functions offered by Helm, but I don't know enough about either to say for sure.
The intended use of the update function seems to be to wrap the appropriate constructor around its argument: update p a s will result in either Changed a or Unchanged a, depending on whether a matches the value stored in s. One might use this, for example, as an argument to foldp:
foldp (update undefined) :: Eq a => Sample a -> Signal a -> Signal (Sample a)
Downstream signals could then ignore Unchanged values easily.
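For what it's worth, here is a hedged reconstruction (not Helm's actual source, just the behaviour described above) of what Sample and update amount to; sampleValue is a made-up helper:

-- Reconstruction for illustration only; the real definitions live in Helm.
data Sample a = Changed a | Unchanged a
  deriving (Show, Eq)

sampleValue :: Sample a -> a
sampleValue (Changed x)   = x
sampleValue (Unchanged x) = x

-- The first argument is the unused 'p' the question asks about.
update :: Eq a => p -> a -> Sample a -> Sample a
update _p new old
  | new == sampleValue old = Unchanged new
  | otherwise              = Changed new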

Why does NPM's policy of duplicated dependencies work?

When I use NPM to manage a package depending on foo and bar, both of which depend on corelib, NPM will by default install corelib twice (once for foo, and once for bar). They might even be different versions.
Now, let's suppose that corelib defines some data structure (e.g. a URL object) which is passed between foo, bar and the main application. What I would expect is that if there was ever a backwards-incompatible change to this object (e.g. one of the field names changed), and foo depended on corelib-1.0 while bar depended on corelib-2.0, I'd be a very sad panda: bar's version of corelib-2.0 might see a data structure created by the old corelib-1.0, and things would not work very well.
I was really surprised to discover that this situation basically never happens (I trawled Google, Stack Overflow, etc, looking for examples of people whose applications had stopped working, but who could have fixed it by running dedupe.) So my question is, why is this the case? Is it because node.js libraries never define data structures that are shared outside of the programmers? Is it because node.js developers never break backwards compatibility of their data structures? I'd really like to know!
this situation basically never happens
Yes, my experience is indeed that that is not a problem in the Node/JS ecosystem. And I think it is, in part, thanks to the robustness principle.
Below is my view on why and how.
Primitives, the early days
I think the first and foremost reason is that the language provides a common basis for primitive types (Number, String, Bool, Null, Undefined) and some basic compound types (Object, Array, RegExp, etc...).
So if I receive a String from one of the libs' APIs I use, and pass it to another, it cannot go wrong because there is just a single String type.
This is what used to happen, and still happens to some extent to this day: Library authors try to rely on the built-ins as much as possible and only diverge when there is sufficient reason to, and with sufficient care and thought.
Not so in Haskell. Before I started using stack, I ran into the following situation quite a few times with Text and ByteString:
Couldn't match type ‘T.Text’ with ‘Text’
NB: ‘T.Text’ is defined in ‘Data.Text.Internal’ in package ‘text-1.2.2.1’
    ‘Text’ is defined in ‘Data.Text.Internal’ in package ‘text-1.2.2.0’
Expected type: String -> Text
  Actual type: String -> T.Text
This is quite frustrating, because in the above example only the patch version is different. The two data types may only be different nominally, and the ADT definition and the underlying memory representation may be completely identical.
As an example, it could have been a minor bugfix to the intersperse function that warranted the release of 1.2.2.1. Which is completely irrelevant to me if all I care about, in this hypothetical example, is concatenating some Texts and comparing their lengths.
Compound types, objects
Sometimes there is sufficient reason to diverge in JS from the built in data types: Take Promises as an example. It's such a useful abstraction over async computations compared to callbacks that many APIs started using them. What now? How come we don't run into many incompatibilities when different versions of these {then(), fail(), ...} objects are being passed up, down and around the dependency tree?
I think it's thanks to the robustness principle.
Be conservative in what you send, be liberal in what you accept.
So if I am authoring a JS library which I know returns promises and takes promises as part of its API, I'll be very careful how I interact with the received objects. E.g. I won't be calling fancy .success(), .finally(), ['catch']() methods on it, since I want to be as compatible as possible with different users and their different implementations of Promises. So, very conservatively, I may just use .then(done, fail), and nothing more. At this point, it doesn't matter if the user uses the promises that my lib returns, or Bluebird's, or even hand-writes their own, so long as those adhere to the most basic Promise 'laws' -- the most basic API contracts.
Can this still lead to breakage at runtime? Yes, it can. If even the most basic API contract is not fulfilled, you may get an exception saying "Uncaught TypeError: promise.then is not a function". I think the trick here is that library authors are explicit about what their API needs: e.g. a .then method on the supplied object. And then it's up to whoever is building on top of that API to make damn sure that that method is available on the object they pass in.
I'd also like to point out that this is the case for Haskell too, isn't it? Should I be so foolish as to write an instance for a typeclass that still type-checks but doesn't follow the class's laws, I'll get runtime errors, won't I?
Where do we go from here?
Having thought through all this just now, I think we might be able to enjoy the benefits of the robustness principle even in Haskell, with much less (or even no?) risk of runtime exceptions/errors compared to JavaScript: we just need the type system to be granular enough that it can distinguish what we want to do with the data we manipulate, and determine whether that is still safe. The hypothetical Text example above, I would wager, is still safe. The compiler should only complain if I try to use intersperse, and ask me to qualify it, e.g. as T.intersperse, so it can be sure which one I want to use.
How do we do this in practice? Do we need extra support, e.g. language extension flags from GHC? We might not.
Just recently I found bookkeeper, which is a compile-time type-checked anonymous records implementation.
Please note: The following is conjecture on my part, I haven't taken much time to try and experiment with Bookkeeper. But I intend to in my Haskell projects to see if what I write about below could really be achieved with an approach such as this.
With Bookkeeper I could define an API like so:
emptyBook & #then =: id & #fail =: const
  :: Bookkeeper.Internal.Book'
       '["fail" 'Data.Type.Map.:-> (a -> b -> a),
         "then" 'Data.Type.Map.:-> (a1 -> a1)]
This works because functions are also first-class values. Whichever API takes this Book as an argument can be very specific about what it demands from it: namely the #then function, and that it has to match a certain type signature. It does not care about any other function that may or may not be present, with whatever signature. All of this is checked at compile time.
Prelude Bookkeeper> let f o = (o ?: #foo) "a" "b" in f $ emptyBook & #foo =: (++)
"ab"
Conclusion
Maybe Bookkeeper or something similar will turn out to be useful in my experiments. Maybe Backpack will rush to the rescue with its common interface definitions. Or some other solution comes along. But either way, I hope we can move towards being able to take advantage of the robustness principle. And that Haskell's dependency management can also "just work" most of the time and fail with type errors only when it is truly warranted.
Does the above make sense? Anything unclear? Does it answer your question? I'd be curious to hear.
Further, possibly relevant discussion may be found in this /r/haskell reddit thread, where this topic came up not long ago; I thought to post this answer in both places.
If I understand correctly, the supposed problem might be:

Module A:
exports = require("c") // v0.1

Module B:
console.log(require("a"))
console.log(require("c")) // v0.2

Module C, v0.1:
exports = "hello";

Module C, v0.2:
exports = "world";

By copying C v0.2 into node_modules and C v0.1 into node_modules/a/node_modules, and creating dummy package.json files, I think I created the case you're talking about.
Will B end up with two different, conflicting versions of C's data?
Short answer: it does. So Node does not handle conflicting versions for you.
The reason you don't see it on the internet is, as gustavohenke explained, that Node naturally does not encourage you to pollute the global scope or to pass structures from module to module along a chain.
In other words, it's not often that you'll see a module export another module's structure.
I don't have first-hand experience with this kind of situation in a large JS program, but I would guess that it has to do with the OO style of bundling data together with the functions that act on that data into a single object. Effectively the "ABI" of an object is to pull public methods by name out of a dictionary, and then invoke them by passing the object as the first argument. (Or perhaps the dictionary contains closures that are already partially applied to the object itself; it doesn't really matter.)
In Haskell we do encapsulation at a module level. For example, take a module that defines a type T and a bunch of functions, and exports the type constructor T (but not its definition) and some of the functions. The normal way to use such a module (and the only way that the type system will permit) is to use one exported function create to create a value of type T, and another exported function consume to consume the value of type T: consume (create a b c) x y z.
If I had two different versions of the module with different definitions of T and I was able to use the create from version 1 together with the consume from version 2 then I'd likely get a crash or wrong answer. Note that this is possible even if the public API and externally observable behavior of the two versions is identical; perhaps version 2 has a different representation of T that allows for a more efficient implementation of consume. Of course, GHC's type system stops you from doing this, but there are no such safeguards in a dynamic language.
You can translate this style of programming directly into a language like JavaScript or Python:
import M
result = M.consume(M.create(a, b, c), x, y, z)
and it would have exactly the same kind of problem that you are talking about.
However, it's far more common to use the OO style:
import M
result = M.create(a, b, c).consume(x, y, z)
Note that only create is imported from the module. consume is in a sense imported from the object we got back from create. In your foo/bar/corelib example, let's say that foo (which depends on corelib-1.0) calls create and passes the result to bar (which depends on corelib-2.0) which will call consume on it. Actually, while foo needs a dependency on corelib to call create, bar does not need a dependency on corelib to call consume at all. It's only using the base language notions to invoke consume (what we could spell getattr in Python). In this situation, bar will end up invoking the version of consume from corelib-1.0 regardless of what version of corelib bar "depends on".
Of course for this to work the public API of corelib must not have changed too much between corelib-1.0 and corelib-2.0. If bar wants to use a method fancyconsume which is new in corelib-2.0 then it won't be present on an object created by corelib-1.0. Still, this situation is much better than we had in original Haskell version, where even changes that do not affect the public API at all can cause breakage. And perhaps bar depends on corelib-2.0 features for the objects it creates and consumes itself, but only uses the API of corelib-1.0 to consume objects it receives externally.
To achieve something similar in Haskell, you could use this translation. Rather than directly using the underlying implementation
data TImpl = TImpl ... -- private
create_ :: A -> B -> C -> TImpl
consume_ :: TImpl -> X -> Y -> Z -> R
...
we wrap up the consumer interface with an existential in an API package corelib-api:
module TInterface where

data T = forall a. T { impl :: a
                     , _consume :: a -> X -> Y -> Z -> R
                     , ... }  -- Or use a type class if preferred.

consume :: T -> X -> Y -> Z -> R
consume t = (_consume t) (impl t)
and then the implementation in a separate package corelib:
module T where
import TInterface
data TImpl = TImpl ... -- private
create_ :: A -> B -> C -> TImpl
consume_ :: TImpl -> X -> Y -> Z -> R
...
create :: A -> B -> C -> T
create a b c = T { impl = create_ a b c
                 , _consume = consume_ }
Now foo uses corelib-1.0 to call create, but bar only needs corelib-api to call consume. The type T lives in corelib-api, so if the public API version does not change, then foo and bar can interoperate even if bar is linked against a different version of corelib.
(I know Backpack has a lot to say about this kind of thing; I'm offering this translation as a way to explain what is happening in the OO programs, not as a style one should seriously adopt.)
Here is a question that mostly answers the same thing: https://stackoverflow.com/a/15948590/2083599
Node.js modules don't pollute the global scope, so when they're required, they'll be private to the module that required them -- and this is great functionality.
When two or more packages require different versions of the same lib, NPM will install a copy for each package, so no conflicts will ever happen.
When they don't, NPM will install that lib only once.
On the other hand, Bower, which is a package manager for the browser, installs only flat dependencies, because the libs go into the global scope, so you can't install jQuery 1.x.x and 2.x.x side by side: they would both export the same jQuery and $ vars.
About the backwards compatibility problems:
All developers do break backwards compatibility at least once! The only difference between Node developers and developers on other platforms is that we have been taught to always use semver.
Considering that most packages out there have not reached v2.0.0 yet, I believe that they have kept the same API in the switch from v0.x.x to v1.0.0.

Why are there no functions for building Events out of non-events in reactive-banana?

I'm in the process of teaching myself FRP and Reactive-banana while writing what I hope will be a more useful tutorial for those that follow me. You can check out my progress on the tutorial here.
I'm stuck trying to implement the simple beepy-noise examples using events. I know I need to do something like this:
reactimate $ fmap (uncurry playNote) myEvent
in my NetworkDescription, but I can't figure out how to just have the network do the same thing repeatedly, or do something once. Ideally, I'm looking for things like this:
once :: a -> Event t a
repeatWithDelay :: Event t a -> Time -> Event t a
concatWithDelay :: Event t a -> Event t a -> Time -> Event t a
The Time type above is just a stand-in for whatever measurement of time we end up using. Do I need to hook up the system time as a Behavior to drive the "delay" functions? That seems more complicated than necessary.
Thanks in advance,
Echo Nolan
EDIT: Okay the types for repeatWithDelay and concatWithDelay don't make sense. Here's what I actually meant.
repeatWithDelay :: a -> Time -> Event t a
concatWithDelay :: a -> a -> Time -> Event t a
I have chosen not to include such functions in the core model for now, because time raises various challenges for consistency. For instance, if two events are scheduled to happen 5 seconds from now, should they be simultaneous? If not, which one should come first? I think the core model should be amenable to formal proof, but this does not work with actual, physical time measurements.
That said, I plan to include such functions in a "they work, but no guarantees" fashion. The main reason that I have not already done so is that there is no canonical choice of time measurement. Different applications have different needs: sometimes you want nanosecond resolution, sometimes you want to use timers from your GUI framework, and sometimes you want to synchronize to an external MIDI clock. In other words, you want the time-based functions to work generically with many timer implementations, and it is only with reactive-banana-0.7.0 that I have found a nice API design for this.
Of course, it is already possible to implement your own time-based function by using timers. The Wave.hs example demonstrates how to do that. Another example is Henning Thielemann's reactive-balsa library, which implements various time-based combinators to process MIDI data in real time.
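For reference, here is a minimal sketch of that timer approach written against the current API (newer than the one discussed in this thread: the Event t phantom parameter is gone and the accumulators are monadic); the half-second interval and the print standing in for playNote are arbitrary choices:

import Control.Concurrent (forkIO, threadDelay)
import Control.Monad (forever)
import Reactive.Banana
import Reactive.Banana.Frameworks

-- Sketch: a repeating "beep" driven by an external timer thread.
main :: IO ()
main = do
  (addTick, fireTick) <- newAddHandler
  network <- compile $ do
    eTick  <- fromAddHandler addTick             -- Event (), one occurrence per timer tick
    eCount <- accumE (0 :: Int) ((+ 1) <$ eTick) -- number the ticks
    reactimate (print <$> eCount)                -- stand-in for playing a note
  actuate network
  _ <- forkIO . forever $ do
    threadDelay 500000                           -- 0.5 s between ticks
    fireTick ()
  threadDelay 5000000                            -- let the network run for a while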

Behavior in reactive-banana

Pardon me, I'm just starting to look into reactive-banana and FRP.
The author of reactive-banana made this example at my suggestion, in which he creates a counter that can be increased and decreased. He uses the accumE function, which accumulates events. I think I was able to somewhat grok the Event type, and was able to test quite a few things with it, but then I remembered that there is also Behavior. I looked into it, but it seems like Behavior is meant to be used in similar situations: to modify an existing value, just like accumE does with events.
What does Behavior mean, and what are the use cases for it?
I agree with Ankur rather than Chris: a text box is a value over time and so naturally wants to be a behavior rather than an event. The reasons Chris gives for the less natural choice of event are implementation issues, and so (if accurate) an unfortunate artifact of the reactive-banana implementation. I'd much rather see the implementation improved than the paradigm used unnaturally.
Besides the semantic fit, it's pragmatically very useful to choose Behavior over Event. You can then, for instance, use the Applicative operations (e.g., liftA2) to combine the time-varying text box value with other time-varying values (behaviors).
Semantically, you have
Behavior a = Time -> a
That is, a Behavior a is a value of type a that varies over time. In general, you know nothing at all about when a Behavior a would change, so it turns out to be a rather poor choice for updating a text field on the click of a button. That said, it would be easy to get a behavior that expresses the current value of the number in the counter example. Just use stepper on the event stream, or alternatively, build it from scratch the same way, except by using accumB instead of accumE.
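Concretely, a sketch in the pure-combinator style used elsewhere in this thread (newer reactive-banana versions make these combinators monadic), assuming a hypothetical eDelta event that carries the +1/-1 functions fired by the two buttons:

let eCounter  = accumE 0 eDelta      -- Event: fires with the new total on each click
    bCounter  = stepper 0 eCounter   -- Behavior: the current total, derived from the event stream
    bCounter' = accumB 0 eDelta      -- the same Behavior, accumulated directly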
Typically, things you hook up to input and output will always be Events, so Behavior is used internally for intermediate results.
Suppose that in the given example, you want to add a new button that remembers the current value, like the memory function on simple calculators. You would start out by adding a memory button and a text field for the remembered value:
bmem <- button f [text := "Remember"]
memory <- staticText f []
You need to be able to ask for the current value at any time, so in your network, you'd add a behavior to represent it.
let currentVal = stepper 0 counter
Then you can hook up events, and use apply to read the value of the behavior every time the Remember button is pressed, and produce an Event with that sequence of values.
emem <- event0 bmem command
let memoryE = apply (const <$> currentVal) emem
And finally, hook up this new event to the output
sink memory [text :== ("", show <$> memoryE)]
If you wanted to use memory internally, then again you'd want a Behavior for its current value too... but since we only ever use it to connect it to an output, we only need an event for now.
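(If we did need it, the same pure stepper as above gives it to us in one line; memoryB is a made-up name:)

let memoryB = stepper 0 memoryE   -- Behavior holding the most recently remembered value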
Does that help?
Library author speaking. :-)
Apparently, Chris Smith can read minds because he accurately describes what I am thinking. :-)
But Conal and Arthur have a point, too. Conceptually, the counter is a value that varies in time, not a sequence of event occurrences. Thus, thinking of it as a Behavior would be more appropriate.
Unfortunately, behaviors do not come with any information about when they will change; they are "poll-only". Now, I could try to implement various clever schemes that minimize the polling and thus allow efficient updates of GUI elements. (Conal does something similar in the original paper.) But I have adopted a "no magic" philosophy: the library user shall be responsible for managing updates via events himself.
The solution I currently envision is to provide a third type besides Event and Behavior, namely Reactive (name subject to change) which embodies qualities of both: conceptually, it's a value that varies in time, but it also comes with an event that notifies of changes. One possible implementation would be
type Reactive a = (a,Event a)
changes :: Reactive a -> Event a
changes (_, e) = e
value :: Reactive a -> Behavior a
value (x, e) = stepper x e
It is no surprise that this is precisely the type that sink expects. This will be included in a future version of the reactive-banana library.
EDIT: I have released reactive-banana version 0.4 which includes the new type, which is now called Discrete.
Generally, a Behavior is a value that changes over a period of time. It is a continuous value, whereas events are discrete values. In the case of a Behavior, a value is always present.
For example, the text in a text box is a Behavior: the text can change over time, but there is always a current value. A keyboard stroke, by contrast, is an Event, as you cannot query a keyboard stroke for its "current" value.
