29th July 2010, 09:06 am
The zipper is an efficient and elegant data structure for purely functional editing of tree-like data structures, first published by Gérard Huet.
Zippers maintain a location of focus in a tree and support navigation operations (up, down, left, right) and editing (replace current focus).
The original zipper type and operations are customized for a single type, but it’s not hard to see how to adapt to other tree-like types, and hence to regular data types.
There have been many follow-up papers to The Zipper, including a polytypic version in the paper Type-indexed data types.
All of the zipper adaptations and generalizations I’ve seen so far maintain the original navigation interface.
In this post, I propose an alternative interface that appears to significantly simplify matters.
There are only two navigation functions instead of four, and each of the two is specified and implemented via a fairly simple one-liner.
I haven’t used this new zipper formulation in an application yet, so I do not know whether some usefulness has been lost in simplifying the interface.
The code in this blog post is taken from the Haskell library functor-combo and completes the `Holey` type class introduced in Differentiation of higher-order types.
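The post's actual formulation goes through the `Holey` class in functor-combo; as background, here is a minimal sketch of a classic Huet-style zipper (with the four-function navigation interface the post proposes to simplify) for a hypothetical binary-tree type, not taken from the library:

```haskell
-- A binary tree and a Huet-style zipper for it (illustrative types,
-- not functor-combo's Holey-based formulation).
data Tree a = Leaf | Node (Tree a) a (Tree a) deriving (Show, Eq)

-- A context records the path from the focus back to the root:
-- at each step, which way we went and what we left behind.
data Ctx a = Top
           | L (Ctx a) a (Tree a)   -- went left; keep parent value, right sibling
           | R (Tree a) a (Ctx a)   -- went right; keep left sibling, parent value

-- A location pairs the focused subtree with its context.
type Loc a = (Tree a, Ctx a)

downLeft, downRight, up :: Loc a -> Maybe (Loc a)
downLeft  (Node l x r, c) = Just (l, L c x r)
downLeft  (Leaf,       _) = Nothing
downRight (Node l x r, c) = Just (r, R l x c)
downRight (Leaf,       _) = Nothing
up (t, L c x r) = Just (Node t x r, c)
up (t, R l x c) = Just (Node l x t, c)
up (_, Top)     = Nothing

-- Editing: replace the current focus.
replace :: Tree a -> Loc a -> Loc a
replace t (_, c) = (t, c)
```

Navigating down, replacing the focus, and navigating back up yields the edited tree with the rest of the structure intact.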
Edits:
- 2010-07-29: Removed some stray `Just` applications in `up` definitions. (Thanks, illissius.)
- 2010-07-29: Augmented my complicated definition of `tweak2` with a much simpler version from Sjoerd Visscher.
- 2010-07-29: Replaced `fmap (first (:ds'))` with `(fmap.first) (:ds')` in `down` definitions. (Thanks, Sjoerd.)
Continue reading ‘Another angle on zippers’ »
28th July 2010, 06:45 pm
A “one-hole context” is a data structure with one piece missing.
Conor McBride pointed out that the derivative of a regular type is its type of one-hole contexts.
When a data structure is assembled out of common functor combinators, a corresponding type of one-hole contexts can be derived mechanically by rules that mirror the standard derivative rules learned in beginning differential calculus.
I’ve been playing with functor combinators lately.
I was delighted to find that the data-structure derivatives can be expressed directly using the standard functor combinators and type families.
The code in this blog post is taken from the Haskell library functor-combo.
See also the Haskell Wikibooks page on zippers, especially the section called “Differentiation of data types”.
I mean this post not as new research, but rather as a tidy, concrete presentation of some of Conor’s delightful insight.
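As a tiny concrete instance of the derivative idea (hand-rolled here; names are illustrative, not functor-combo's API): for the pair functor P x = x × x, the calculus-style product rule gives derivative x + x, so a one-hole context for a pair is the remaining component together with a tag saying which slot the hole is in:

```haskell
-- Pair functor: P x = x * x.  By the product rule its derivative is
-- d P x = x + x, so a one-hole context is the *other* component,
-- tagged with which slot the hole occupies.
type Pair a = (a, a)

data PairCtx a = HoleFst a  -- hole in the first slot; stores the second
               | HoleSnd a  -- hole in the second slot; stores the first
  deriving (Show, Eq)

-- Plugging a value into the hole recovers a complete pair.
fill :: PairCtx a -> a -> Pair a
fill (HoleFst y) x = (x, y)
fill (HoleSnd y) x = (y, x)
```

For example, `fill (HoleFst 2) 1` gives `(1, 2)`.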
Continue reading ‘Differentiation of higher-order types’ »
26th July 2010, 04:14 pm
In Non-strict memoization, I sketched out a means of memoizing non-strict functions.
I gave the essential insight but did not show the details of how a non-strict memoization library comes together.
In this new post, I give details, which are a bit delicate, in terms of the implementation described in Elegant memoization with higher-order types.
Near the end, I run into some trouble with regular data types, which I don’t know how to resolve cleanly and efficiently.
Edits:
- 2010-09-10: Fixed minor typos.
Continue reading ‘Details for non-strict memoization, part 1’ »
21st July 2010, 07:41 am
Memoization incrementally converts functions into data structures. It pays off when a function is repeatedly applied to the same arguments and applying the function is more expensive than accessing the corresponding data structure.
In lazy functional memoization, the conversion from function to data structure happens all at once from a denotational perspective, and incrementally from an operational perspective. See Elegant memoization with functional memo tries and Elegant memoization with higher-order types.
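A tiny instance of the idea (not MemoTrie's actual interface): memoize a function on naturals by tabulating it in a lazy list, which is a complete table denotationally but gets filled in on demand operationally:

```haskell
-- Memoize a function on naturals via an infinite lazy list: the
-- "data structure" for Int -> a (restricted to naturals) is just
-- [f 0, f 1, f 2, ...], built lazily as entries are demanded.
memoNat :: (Int -> a) -> (Int -> a)
memoNat f = (map f [0 ..] !!)

-- Example payoff: naive Fibonacci becomes fast once its recursive
-- calls go through the memo table.
fib :: Int -> Integer
fib = memoNat fib'
  where
    fib' 0 = 0
    fib' 1 = 1
    fib' n = fib (n - 1) + fib (n - 2)
```

Without the `memoNat` in the knot, `fib` is exponential; with it, each entry is computed once.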
As Ralf Hinze presented in Generalizing Generalized Tries, trie-based memoization follows from three simple isomorphisms involving functions types:
1 → a ≅ a
(a + b) → c ≅ (a → c) × (b → c)
(a × b) → c ≅ a → (b → c)
which correspond to the familiar laws of exponents
a^1 = a
c^(a+b) = c^a × c^b
c^(a×b) = (c^b)^a
When applied as a transformation from left to right, each law simplifies the domain part of a function type. Repeated application of the rules then eliminates all function types or reduces them to functions of atomic types. These atomic domains are eliminated as well by additional mappings, such as between a natural number and a list of bits (as in Patricia trees). Algebraic data types correspond to sums of products and so are eliminated by the sum and product rules. Recursive algebraic data types (lists, trees, etc.) give rise to correspondingly recursive trie types.
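The three isomorphisms can be written out directly as Haskell conversions (the function names here are my own, not Hinze's):

```haskell
-- (a + b) -> c  ≅  (a -> c) × (b -> c)
untangle :: (Either a b -> c) -> (a -> c, b -> c)
untangle f = (f . Left, f . Right)

tangle :: (a -> c, b -> c) -> (Either a b -> c)
tangle (f, g) = either f g

-- (a × b) -> c  ≅  a -> (b -> c): this one is just curry/uncurry.

-- 1 -> a  ≅  a: apply to (), or use const.
unUnit :: (() -> a) -> a
unUnit f = f ()

toUnit :: a -> (() -> a)
toUnit = const
```

Each left-to-right direction strips structure off the domain, which is exactly how a trie replaces a function with data.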
So, with a few simple and familiar rules, we can memoize functions over an infinite variety of common types. Have we missed any?
Yes. What about functions over functions?
Edits:
- 2010-07-22: Made the memoization example polymorphic and switched from pairs to lists. The old example accidentally coincided with a specialized version of `trie` itself.
- 2011-02-27: Updated some notation.
Continue reading ‘Memoizing higher-order functions’ »
20th July 2010, 08:48 pm
A while back, I got interested in functional memoization, especially after seeing some code from Spencer Janssen using the essential idea of Ralf Hinze’s paper Generalizing Generalized Tries.
The blog post Elegant memoization with functional memo tries describes a library, MemoTrie, based on both of these sources, and using associated data types.
I would have rather used associated type synonyms and standard types, but I couldn’t see how to get the details to work out.
Recently, while playing with functor combinators, I realized that they might work for memoization, which they do quite nicely.
This blog post shows how functor combinators lead to an even more elegant formulation of functional memoization.
The code is available as part of the functor-combo package.
The techniques in this post are not so much new as they are ones that have recently been sinking in for me.
See Generalizing Generalized Tries, as well as Generic programming with fixed points for mutually recursive datatypes.
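For readers unfamiliar with functor combinators, here is a sketch of the usual definitions; the names follow functor-combo's conventions, but the details here are reconstructed from memory rather than quoted from the package:

```haskell
{-# LANGUAGE TypeOperators #-}

-- The standard functor combinators: constants, identity, sum,
-- product, and composition (illustrative definitions).
newtype Const b a   = Const b
newtype Id a        = Id a
data    (f :+: g) a = InL (f a) | InR (g a)
data    (f :*: g) a = f a :*: g a
newtype (g :.: f) a = O (g (f a))

-- Unit as mentioned in the edit notes above.
type Unit = Const ()

instance Functor (Const b) where fmap _ (Const b) = Const b
instance Functor Id        where fmap h (Id a)    = Id (h a)
instance (Functor f, Functor g) => Functor (f :+: g) where
  fmap h (InL fa) = InL (fmap h fa)
  fmap h (InR ga) = InR (fmap h ga)
instance (Functor f, Functor g) => Functor (f :*: g) where
  fmap h (fa :*: ga) = fmap h fa :*: fmap h ga
instance (Functor g, Functor f) => Functor (g :.: f) where
  fmap h (O gfa) = O (fmap (fmap h) gfa)
```

Regular data types are assembled from these pieces, which is what lets both tries and one-hole contexts be derived combinator by combinator.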
Edits:
- 2011-01-28: Fixed small typo: “b^^a^^” ⟼ “b^a”.
- 2010-09-10: Corrected `Const` definition to use `newtype` instead of `data`.
- 2010-09-10: Added missing `Unit` type definition (as `Const ()`).
Continue reading ‘Elegant memoization with higher-order types’ »
13th July 2010, 06:46 pm
I’ve written a few posts about functional memoization.
In one of them, Luke Palmer commented that the memoization methods are correct only for strict functions, which I had not noticed before.
In this note, I correct this flaw, extending correct memoization to non-strict functions as well.
The semantic notion of least upper bound (which can be built of unambiguous choice) plays a crucial role.
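To see where strictness sneaks in (a sketch only; the post's actual repair via least upper bounds is not reproduced here): a trie-style memoizer dispatches on the *argument* to pick an entry, so the memoized function forces its argument even when the original would not.

```haskell
-- A memoizer for Bool-argument functions: the "trie" for Bool -> a
-- is just the pair of results, shared across calls.
memoBool :: (Bool -> a) -> (Bool -> a)
memoBool f = \b -> if b then fT else fF
  where
    fF = f False
    fT = f True

-- A non-strict function: f undefined = 3.
f :: Bool -> Int
f = const 3

-- f undefined          = 3
-- memoBool f undefined = ⊥   (the `if` forces the argument),
-- so memoBool f /= f on non-strict functions.
```

On defined arguments the two agree; the mismatch appears only at ⊥, which is exactly the flaw the post corrects.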
Edits:
- 2010-07-13: Fixed the non-strict memoization example to use an argument of `undefined` (⊥) as intended.
- 2010-07-23: Changed spelling from “nonstrict” to the much more popular “non-strict”.
- 2011-02-16: Fixed minor typo. (“constraint on result” → “constraint on the result type”)
Continue reading ‘Non-strict memoization’ »