Simplifying semantics with type class morphisms
When I first started playing with functional reactivity in Fran and its predecessors, I didn’t realize that much of the functionality of events and reactive behaviors could be packaged via standard type classes.
Then Conor McBride & Ross Paterson introduced us to applicative functors, and I remembered using that pattern to reduce all of the lifting operators in Fran to just two, which correspond to pure and (<*>) in the Applicative class.
So, in working on a new library for functional reactive programming (FRP), I thought I’d modernize the interface to use standard type classes as much as possible.
While spelling out a precise (denotational) semantics for the FRP instances of these classes, I noticed a lovely recurring pattern:
The meaning of each method corresponds to the same method for the meaning.
In this post, I’ll give some examples of this principle and muse a bit over its usefulness. For more details, see the paper Simply efficient functional reactivity. Another post will start exploring type class morphisms and type composition, and ask questions I’m wondering about.
Behaviors
The meaning of a (reactive) behavior is a function from time:
type B a = Time -> a
at :: Behavior a -> B a
So the semantic function, at, maps from the Behavior type (for use in FRP programs) to the B type (for understanding FRP programs).
As a simple example, the meaning of the behavior time is the identity function:
at time == id
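To make the semantic function concrete, here is a minimal sketch in which (purely for illustration) Behavior is represented by its meaning directly. The names Behavior, at, B, and time match the post, but this toy representation is my assumption; the actual library keeps Behavior abstract:

```haskell
type Time = Double

-- The semantic domain: a function of time.
type B a = Time -> a

-- Toy representation: take the meaning as the representation.
newtype Behavior a = Behavior (B a)

-- The semantic function.
at :: Behavior a -> B a
at (Behavior f) = f

-- The behavior whose value at time t is t itself.
time :: Behavior Time
time = Behavior id

main :: IO ()
main = print (at time 3.5)  -- at time == id, so this prints 3.5
```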
Functor
Given b :: Behavior a and a function f :: a -> b, we can apply f to the value of b at every moment in (infinite and continuous) time. This operation corresponds to the Functor method fmap, so
instance Functor Behavior where ...
The informal description of fmap on behaviors translates to a formal definition of its semantics:
fmap f b `at` t == f (b `at` t)
Equivalently,
at (fmap f b) == \t -> f (b `at` t)
              == f . (\t -> b `at` t)
              == f . at b
Now here’s the fun part.
While Behavior is a functor, so is its meaning:
instance Functor ((->) t) where fmap = (.)
So, replacing f . at b with fmap f (at b) above,
at (fmap f b) == fmap f (at b)
which can also be written
at . fmap f == fmap f . at
Keep in mind here that the fmap on the left is on behaviors, while the one on the right is on functions (of time).
This last equation can also be written as a simple square commutative diagram, and is sometimes expressed by saying that at is a “natural transformation” or “morphism on functors” [Categories for the Working Mathematician].
For consistency with similar properties on other type classes, I suggest “functor morphism” as a synonym for natural transformation.
The Haskell wiki page on natural transformations shows the commutative diagram and gives maybeToList as another example.
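As a quick, checkable instance of naturality, maybeToList commutes with fmap in exactly the way at does above:

```haskell
import Data.Maybe (maybeToList)

-- Naturality: maybeToList . fmap f == fmap f . maybeToList
main :: IO ()
main = do
  print (maybeToList (fmap (+ 1) (Just 2)) == fmap (+ 1) (maybeToList (Just 2)))
  print (maybeToList (fmap (+ 1) (Nothing :: Maybe Int))
         == fmap (+ 1) (maybeToList (Nothing :: Maybe Int)))
```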
Applicative functor
The fmap method applies a static (not time-varying) function to a dynamic (time-varying) argument.
A more general operation applies a dynamic function to a dynamic argument.
Also useful is promoting a static value to a dynamic one.
These two operations correspond to (<*>) and pure for applicative functors:
infixl 4 <*>
class Functor f => Applicative f where
  pure  :: a -> f a
  (<*>) :: f (a -> b) -> f a -> f b
where, e.g., f == Behavior.
From these two methods, all of the n-ary lifting functions follow. For instance,
liftA3 :: Applicative f =>
          (a -> b -> c -> d)
       -> f a -> f b -> f c -> f d
liftA3 h fa fb fc = pure h <*> fa <*> fb <*> fc
Or use fmap h fa in place of pure h <*> fa.
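Since the meaning of a behavior is a function of time, the standard ((->) t) instance already exhibits the synchronous sampling: below, all three time-varying arguments of liftA3 are sampled at the same t. The particular functions are made-up examples:

```haskell
import Control.Applicative (liftA3)

-- Three "time-varying" values, combined pointwise over time.
dynSum :: Double -> Double
dynSum = liftA3 (\a b c -> a + b + c) (+ 1) (* 2) negate

main :: IO ()
main = print (dynSum 3)  -- (3 + 1) + (3 * 2) + negate 3 == 7.0
```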
For prettier code, (<$>) (left infix) is synonymous with fmap.
Now, what about semantics?
Applying a dynamic function fb to a dynamic argument xb gives a dynamic result, whose value at time t is the value of fb at t, applied to the value of xb at t.
at (fb <*> xb) == \t -> (fb `at` t) (xb `at` t)
The (<*>) operator is the heart of FRP’s concurrency model, which is determinate, synchronous, and continuous.
Promoting a static value yields a constant behavior:
at (pure a) == \t -> a
            == const a
As with Functor, let’s look at the Applicative instance of functions (the meaning of behaviors):
instance Applicative ((->) t) where
  pure a    = const a
  hf <*> xf = \t -> (hf t) (xf t)
Wow — these two definitions look a lot like the meanings given above for pure and (<*>) on behaviors.
And sure enough, we can use the function instance to simplify these semantic definitions:
at (pure a) == pure a
at (fb <*> xb) == at fb <*> at xb
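These semantic equations can be spot-checked directly against the standard ((->) t) instance; pureF and apF below are my names for hand-written versions of the semantic definitions, not part of any library:

```haskell
-- Hand-written versions of the semantic definitions.
pureF :: a -> (t -> a)
pureF a = const a

apF :: (t -> a -> b) -> (t -> a) -> (t -> b)
apF hf xf = \t -> (hf t) (xf t)

main :: IO ()
main = do
  -- Both sides sample their arguments at the same t (here t == 5).
  print (apF (+) (* 2) 5 == (((+) <*> (* 2)) 5 :: Int))
  print (pureF 'x' (42 :: Int) == pure 'x' 42)
```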
Thus the semantic function distributes over the Applicative methods.
In other words, the meaning of each method is the method on the meaning.
I don’t know of any standard term (like “natural transformation”) for this relationship between at and pure/(<*>). I suggest calling at an “applicative functor morphism”.
Monad
Monad morphisms are a bit trickier, due to the types.
There are two equivalent forms of the definition of a monad morphism, depending on whether you use join or (>>=).
In the join form (e.g., in Comprehending Monads, section 6), for monads m and n, the function nu :: forall a. m a -> n a is a monad morphism if
nu . join == join . nu . fmap nu
where
join :: Monad m => m (m a) -> m a
For behavior semantics, m == Behavior, n == B == (->) Time, and nu == at.
Then at is a monad morphism if
at (return a) == return a
at (join bb) == join (at (fmap at bb))
And, since for functions f,
fmap h f == h . f
join f   == \t -> f t t
the second condition is
at (join bb) == join (at (fmap at bb))
             == \t -> at (fmap at bb) t t
             == \t -> at (at bb t) t
             == \t -> (bb `at` t) `at` t
So sampling join bb at t means sampling bb at t to get a behavior b, which is also sampled at t. That’s exactly what I’d guess join to mean on behaviors.
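The function instance’s join makes this double sampling explicit; bb below is a made-up example of a nested “function of time”:

```haskell
import Control.Monad (join)

-- A nested "function of time" (illustrative example).
bb :: Int -> Int -> Int
bb t u = t * 10 + u

-- For functions, join samples outer and inner at the same t:
-- join f == \t -> f t t
main :: IO ()
main = do
  print (join bb 3)                      -- bb 3 3 == 33
  print (join bb 3 == (\t -> bb t t) 3)  -- True
```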
Note: the FRP implementation described in Simply efficient functional reactivity does not include a Monad instance for Behavior, because I don’t see how to implement one with the hybrid data-/demand-driven Behavior implementation. However, the closely related but less expressive type, Reactive, has the same semantic model as Behavior. Reactive does have a Monad instance, and its semantic function (rats) is a monad morphism.
Other examples
The Simply paper contains several more examples of type class morphisms:
- Reactive values, time functions, and future values are also morphisms on Functor, Applicative, and Monad.
- Improving values are morphisms on Ord.
The paper also includes a significant non-example, namely events.
The semantics I gave for Event a is a time-ordered list of time/value pairs. However, the semantic function (occs) is not a Monoid morphism, because
occs (e `mappend` e') == occs e `merge` occs e'
and merge is not (++), which is mappend on lists.
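To see concretely why occs fails to be a Monoid morphism, compare a time-ordered merge with plain concatenation. This merge is a sketch of the intended semantics, assuming the time/value-pair model above:

```haskell
-- Interleave two time-ordered occurrence lists, preserving time order.
merge :: Ord t => [(t, a)] -> [(t, a)] -> [(t, a)]
merge [] ys = ys
merge xs [] = xs
merge xs@((tx, x) : xs') ys@((ty, y) : ys')
  | tx <= ty  = (tx, x) : merge xs' ys
  | otherwise = (ty, y) : merge xs ys'

main :: IO ()
main = do
  let e  = [(1 :: Int, "a"), (4, "c")]
      e' = [(2, "b"), (5, "d")]
  print (merge e e')  -- [(1,"a"),(2,"b"),(4,"c"),(5,"d")]
  print (e ++ e')     -- [(1,"a"),(4,"c"),(2,"b"),(5,"d")], not time-ordered
```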
Why care about type class morphisms?
I want my library’s users to think of behaviors and future values as being their semantic models (functions of time and time/value pairs). Why? Because these denotational models are simple and precise and have simple and useful formal properties. Those properties allow library users to program with confidence, and allow library providers to make radical changes in representation and implementation (even from demand-driven to data-driven) without breaking client programs.
When I think of a behavior as a function of time, I’d like it to act like a function of time, hence Functor, Applicative, and Monad.
And if it does implement any classes in common with functions, then it had better agree with the function instances of those classes.
Otherwise, user expectations will be mistaken, and the illusion is broken.
I’d love to hear about other examples of type class morphisms, particularly for Applicative and Monad, as well as thoughts on their usefulness.
Paul Johnson:
What do you mean by “meaning”? I noticed the meaning = definition pattern in your paper, but since I’ve never come across this use of the word “meaning”, I can’t see how it’s anything other than a tautology.
Thanks,
Paul.
9 April 2008, 12:24 am
Christophe Poucet (vincenz):
Great article, I really liked it. Especially after the clarification. The subtler points are definitely food for thought; thanks for the conversation.
10 April 2008, 9:22 am
conal:
Hi Paul, Thanks for the question. Please let me know if the following explanation helps.
In FRP, Behavior is a parameterized abstract data type:

type Behavior a

This type can be represented in various ways (e.g., see Functional Implementations of Continuous Modeled Animation). In this post, however, I don’t address the representation at all, leaving Behavior abstract. Instead, I talk about the meaning of behaviors, in the sense of denotational semantics. (In case this notion of meaning is not clear yet, consider the relationship between the numeral “3” and the number 3. We conventionally interpret numerals as numbers, but numerals are symbols, while numbers are quantities.) What’s perhaps unusual here is that I’m giving the meaning of a data type, not a language. (Though languages can be viewed as data types in a meta-language.)

A behavior represents (can be interpreted as) a value that varies in time. The semantic domain B and semantic function, at :: Behavior a -> B a, make that interpretation formal. For instance, in the meaning of the behavior time, note that time :: Behavior Time, while id :: B Time, i.e., id :: Time -> Time.
10 April 2008, 9:26 am
Raoul Duke:
Re: Paul’s question.
(I’m a complete hick when it comes to most mathematics, let alone this stuff, so apologies if I’m being really painfully clueless.)
Conal, I interpret what you say at the start of your post to mean that Behaviour is used in an actual program e.g. Haskell code that can be compiled and run, whereas B is not about Haskell but about meaning/semantics/’purely’ math, but you happen to use Haskell notation to talk about it.
??
10 April 2008, 5:03 pm
conal:
Raoul: yes, that’s exactly what I mean. Thanks.
10 April 2008, 5:09 pm
Kefer:
@ Paul, Raoul:
“Meaning” is what you want in your mind when working with this stuff, “representation” is the irrelevant nuts-and-bolts that get in the way.
Instead of the meaning of the number “3”, you could get sidetracked, e.g., by the fontography of the symbol, its various representational history and evolution, etc.
Conal is building a new set of tools (i.e. a library). Every code library is essentially an abstraction layer. How to go about reasoning with the tools, using them effectively, is what he’s trying to get at with “meaning”, as opposed to the innards of the abstraction layer, the “representation.”
10 April 2008, 8:03 pm
Conal Elliott » Blog Archive » Elegant memoization with functional memo tries:
[…] algebraic construction of memo tries, using type class morphisms, […]
15 October 2008, 6:17 pm
Conal Elliott » Blog Archive » Another lovely example of type class morphisms:
[…] Peeking out from behind Max’s definitions is a lovely pattern I’ve been noticing more and more over the last couple of years, namely type class morphisms. […]
13 November 2008, 10:20 pm
Conal Elliott » Blog Archive » More beautiful fold zipping:
[…] class morphisms, placed some standard structure around Max Rabkin’s Beautiful folding, using type class morphisms to confirm that the Functor and Applicative instances agreed with their inevitable […]
14 November 2008, 11:24 pm
Conal Elliott » Blog Archive » Proofs for left fold zipping:
[…] morphism? (See More beautiful fold zipping for definitions of these functions and classes, and see Simplifying semantics with type class morphisms for the notion of type class […]
15 November 2008, 1:30 pm
Conal Elliott » Blog Archive » Enhancing a Zip:
[…] Simplifying semantics with type class morphisms […]
15 November 2008, 5:11 pm
Conal Elliott » Blog Archive » Composing memo tries:
[…] The post Elegant memoization with functional memo tries showed a simple type of search tries and their use for functional memoization of functions. This post provides some composition tools for memo tries, whose definitions are inevitable, in that they are determined by the principle presented in Simplifying semantics with type class morphisms. […]
2 December 2008, 10:26 am
Conal Elliott » Blog Archive » Sequences, segments, and signals:
[…] have to agree with the model’s instances. I’ve described this latter principle in Simplifying semantics with type class morphisms and some other […]
6 December 2008, 8:27 am
sclv:
OK this is probably blindingly obvious but it just hit me that there’s obviously some sort of deep connection between type class morphisms and free theorems — whether this can tell us more about free theorems, or whether this means that we can use free theorems to derive type class morphisms, or whether this is just a trivial statement that they both involve reasoning with types isn’t quite clear to me yet.
5 January 2009, 8:57 pm
Conal Elliott » Blog Archive » Simpler, more efficient, functional linear maps:
[…] other words, the meaning of a liftA2 is the liftA2 of the meanings, as discussed in Simplifying semantics with type class morphisms. […]
23 January 2009, 10:11 pm
James Deikun:
This works better than you thought:
merge isn’t mappend on lists … but it is mappend on ordered lists!
16 August 2010, 7:07 pm
conal:
James: only if all the occurrences in the first event precede all the occurrences in the second, right?
17 August 2010, 9:13 pm
James Deikun:
No; I’m not talking about regular lists that happen to be ordered, but about values belonging to a type of list that can only be ordered, not disordered. You can express ADTs that act like this in Haskell, but the natural and beautiful expression of the type requires dependent types, probably to a greater degree than Haskell can fake.
Pseudocode:
OrderedList is a monoid with merge as mappend. And (at least in its proof-irrelevant refinement form) it’s obviously the type you actually want to be using to define the denotation of events.
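James’s pseudocode isn’t shown above, but the idea can be sketched in plain Haskell, with the ordering invariant maintained by construction rather than enforced by the type system (which, as he notes, would need dependent types). OrderedList and merge here are my illustrative names:

```haskell
-- A list kept sorted by construction; merge is its mappend.
newtype OrderedList a = OrderedList [a] deriving (Eq, Show)

merge :: Ord a => [a] -> [a] -> [a]
merge [] ys = ys
merge xs [] = xs
merge xs@(x : xs') ys@(y : ys')
  | x <= y    = x : merge xs' ys
  | otherwise = y : merge xs ys'

instance Ord a => Semigroup (OrderedList a) where
  OrderedList xs <> OrderedList ys = OrderedList (merge xs ys)

instance Ord a => Monoid (OrderedList a) where
  mempty = OrderedList []

main :: IO ()
main = print (OrderedList [1, 4, 6] <> OrderedList [2, 3, 5 :: Int])
```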
19 August 2010, 4:09 pm