# Eff

Eff has many superficial similarities with Haskell. this is no surprise because there is a precise connection between algebras and monads. the main advantage of Eff over Haskell is supposed to be the ease with which computational effects can be combined

## theory

### algebras

an algebra ⟨ A , f_1 , ... , f_k ⟩ is a set A together with maps f_i : A^{n_i} -> A for i = 1, ... , k, called "the operations". the number n_i is "the arity of the operation f_i"

the quintessential example of an algebra is a group, which is a set G together with three operations

```
u : 1   → G
i : G^1 → G
m : G^2 → G
```

of arities 0, 1, and 2, respectively. these are of course the unit, the inverse, and the multiplication, and for G to really be a group they have to satisfy the usual group axioms
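to make this concrete, here is a minimal Python sketch (illustrative, not Eff code) of the integers mod 5 as such an algebra, with addition playing the role of the multiplication operation, and the group axioms checked exhaustively:

```python
# the integers mod 5 as an algebra <G, u, i, m> with operations
# of arity 0, 1 and 2 (here "multiplication" is addition mod 5)
G = range(5)

def u():          # unit, arity 0
    return 0

def i(a):         # inverse, arity 1
    return (-a) % 5

def m(a, b):      # multiplication, arity 2
    return (a + b) % 5

# check the usual group axioms on every element
assert all(m(u(), a) == a for a in G)                      # left unit
assert all(m(i(a), a) == u() for a in G)                   # left inverse
assert all(m(m(a, b), c) == m(a, m(b, c))
           for a in G for b in G for c in G)               # associativity
```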

Ok, given an n-ary operation f : A^n → A and elements t_1, ... , t_n ∈ A, we can form the element f(t_1, ... , t_n) ∈ A

we shall need operations that take parameters. an example of a parametrized operation is scalar multiplication `m : R * V -> V ` which takes a scalar and a vector and returns a vector. the arity of m is 1, but m also takes an additional parameter from R. in general a parametrized operation on a set A is a map

`f : P * A^N -> A`

where
P is the set of parameters,
N is the arity and
A is the carrier

the pair (P , N) determines the type of the operation f

a signature sigma is a list of operation names together with their types:

` f_1 : (P_1 , N_1) , ... , f_k : (P_k , N_k) `

for example, the signature for a group is

`u : (1 , 0) ,   i : (1 , 1) ,   m : (1 , 2) `

a sigma-algebra is an algebra whose operations are prescribed by the signature sigma

### homomorphisms

let's look at the category which has the sigma-algebras as objects and, as morphisms, maps which preserve the operations (aka homomorphisms)

the free sigma-algebra FSigma(X) generated by a set X is defined inductively by the rules:

1. for every x ∈ X, return(x) ∈ FSigma(X)
2. if f : (P , N) is an operation, p ∈ P, and k : N -> FSigma(X), then the symbolic expression f(p , k) ∈ FSigma(X)

the reason for writing return(x) instead of just x is that it should remind you of return from Haskell's monads

the elements of FSigma(X) are best thought of as well-founded trees whose leaves are labeled with elements of X and whose branching types are the operations from the sigma
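such trees can be sketched in Python with a hypothetical encoding (the names `Return` and `Op` are mine, not Eff's internals): a leaf carries an element of X, and a node carries an operation name, a parameter, and a continuation mapping each arity index to a subtree:

```python
# elements of the free algebra F_Sigma(X) as well-founded trees:
#   Return(x)    -- a leaf labeled with x in X
#   Op(f, p, k)  -- a node for operation f : (P, N) with parameter p in P
#                   and continuation k : N -> tree (one subtree per index)
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Return:
    x: Any

@dataclass
class Op:
    f: str
    p: Any
    k: Callable[[Any], Any]

# example: update : (S, 1), a unary operation with a parameter from S --
# "write 7, then return the string 'done'"
t = Op("update", 7, lambda _: Return("done"))
assert isinstance(t.k(()), Return)
assert t.k(()).x == "done"
```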

given a sigma-algebra A and a map `r : X -> A` there is a unique homomorphism `h : FSigma(X) -> A` such that,

```
∀ x ∈ X:                                               h(return(x)) = r(x)
∀ f : (P , N) ∈ Sigma, p ∈ P, k : N -> FSigma(X):      h(f(p , k)) = f_A(p , h ∘ k)
```

here f_A : P * A^N -> A is the operation on A corresponding to f

you may view these two equations as a recursive definition of h
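read that way, the two equations become a short fold over trees; here is a self-contained Python sketch (with elements of the free algebra encoded as hypothetical `Return`/`Op` nodes, an illustrative encoding rather than Eff's internals):

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Return:
    x: Any

@dataclass
class Op:
    f: str
    p: Any
    k: Callable[[Any], Any]

def hom(ops, r):
    """the unique homomorphism h : F_Sigma(X) -> A determined by
    the operations `ops` of the algebra A and the map r : X -> A"""
    def h(t):
        if isinstance(t, Return):
            return r(t.x)                          # h(return(x)) = r(x)
        return ops[t.f](t.p, lambda n: h(t.k(n)))  # h(f(p,k)) = f_A(p, h . k)
    return h

# example: interpret a unary "double" operation over the integers
ops = {"double": lambda p, k: 2 * k(())}   # parameter ignored, arity 1
h = hom(ops, r=lambda x: x)
t = Op("double", None, lambda _: Op("double", None, lambda _: Return(3)))
assert h(t) == 12                          # 2 * (2 * 3)
```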

### two examples from CS

#### mutable store

that is, a program computing a value of type T with access to one mutable memory cell holding a value of type S

the content of the cell can be read with operation lookup and written to with operation update

the situation is modeled with an algebra whose carrier is T^S

the operations lookup : (T^S)^S -> T^S and update : S * T^S -> T^S are defined as

```
lookup (k)    = \s . k s s
update (t, k) = \s . k t
```

thus lookup accepts no parameters (in other words, the parameter space is the singleton 1 ,aka ⊤ , aka unit ) and has arity 0, while update accepts a parameter of type S and has arity 1

these two operations satisfy various equations, for example

```
update (t, update (u, k)) = update (u, k)
lookup (\s . update (s, k)) = k
```

the first one says that writing t then u to the memory cell is the same as writing just u. the second one says that writing the value that has just been looked up is the same as doing nothing

you may try writing down all the relevant equations for lookup and update before looking them up in the papers of Gordon Plotkin and John Power. observe that lookup and update are polymorphic in the type T of the result. this means that we can use them anywhere in the program, no matter what happens afterwards. many operations are like that, but not all
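both equations can be checked pointwise in Python, representing elements of T^S as functions from states to results (an illustrative sketch, not Eff code):

```python
# carrier: T^S, i.e. functions from states in S to results in T
def lookup(k):       # k : S -> T^S ; no parameter, arity S
    return lambda s: k(s)(s)

def update(t, k):    # t : S is the parameter, k : T^S, arity 1
    return lambda s: k(t)

S = range(4)             # a small state space to test on
k = lambda s: s * 10     # some element of T^S

# update(t, update(u, k)) = update(u, k): the second write wins
lhs = update(1, update(2, k))
rhs = update(2, k)
assert all(lhs(s) == rhs(s) for s in S)

# lookup(\s. update(s, k)) = k: writing back what was just read
# is the same as doing nothing
lhs2 = lookup(lambda s: update(s, k))
assert all(lhs2(s) == k(s) for s in S)
```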

two points of confusion arise:

1. what is k in lookup(k)?
answer: it is the rest of the program, it is what happens after lookup, it is the continuation
2. does update (t, update (u, k)) mean that we update with t and then with u, or the other way around?
answer: update with t happens before update with u because the argument of an operation is the stuff that happens after the operation

#### exceptions

to make things simple and familiar, we shall consider just one exception called fail which takes no parameters

In Haskell such a thing is modeled with the Maybe monad, so let us use Haskell-like notation. the type of fail is

`fail : unit -> Maybe T`

and its action is

` fail() = Nothing `

this is a bit mysterious but if we write the type of the operation out in full,

`fail : 1 * (1 + T)^0 -> 1 + T`

we see that fail takes no parameters, is a nullary operation, and that a (possibly) aborting program returns either Nothing or a result Just x
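a small Python sketch of this (using `None` for Nothing and a tagged tuple for Just x, an illustrative encoding): fail is a nullary operation, so there is no continuation to call and the whole computation aborts:

```python
# Maybe T encoded as: None for Nothing, ("Just", x) for Just x
def fail():
    # nullary operation with trivial parameter: no continuation exists,
    # so the computation simply yields Nothing
    return None

def ret(x):
    return ("Just", x)

def bind(m, f):
    # sequencing for Maybe: propagate Nothing, otherwise continue with f
    return None if m is None else f(m[1])

assert bind(fail(), lambda x: ret(x + 1)) is None
assert bind(ret(2), lambda x: ret(x + 1)) == ("Just", 3)
```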

how do we model exception handlers?

Gordon Plotkin and Matija Pretnar had the important idea that a piece of code with unhandled fail amounts to an element of the free algebra for the signature `fail : (1,0)`, while handled fail corresponds to some other algebra for the same signature. the unique homomorphism from the free algebra to the other one is the handling construct.

to see this, consider the following piece of (pseudo)code:

``````
handle:
  a = 1
  b = 2
  fail ()
  return (a + b)
with:
  operation fail () : return (42)
``````

the code between `handle` and `with` is an element of the free algebra, namely the element `fail()`

ok, you are wondering where the rest went, but consider this: either the program raises fail, in which case it is equivalent to `fail()`, or it returns a result x, in which case it is equivalent to `return(x)`

or to put it in a different way: we can delete all the stuff that comes after fail because it will never happen, while things before it have no effect on the end result either (provided they are free of computational effects).

the `with` part tells us that `fail()` should be interpreted as 42. but this is precisely what an algebra for the signature fail : (1,0) is - a set with a chosen element, in our case the set of integers with the chosen element 42
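as a Python sketch (an illustrative encoding, not Eff's implementation), the handler is exactly this homomorphism: fold the free-algebra tree into the integers with chosen element 42:

```python
# free algebra for the signature fail : (1, 0):
# a tree is either ("return", x) or ("fail",)
def handle(t, on_fail, on_return):
    # the unique homomorphism into the algebra (A, on_fail)
    # with h(return(x)) = on_return(x)
    if t[0] == "return":
        return on_return(t[1])
    return on_fail

# the code between `handle` and `with` is equivalent to the element fail()
program = ("fail",)
assert handle(program, on_fail=42, on_return=lambda x: x) == 42

# a program that returns normally is equivalent to return(x)
assert handle(("return", 3), on_fail=42, on_return=lambda x: x) == 3
```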

### handlers

all of what we have just said about exceptions generalizes to arbitrary signatures and handlers: every algebraic computational effect has an associated notion of handlers. in Eff you can handle (intercept, catch) not only exceptions, but also input, output, access to memory, etc

let us look at the theoretical background for this idea.

let Sigma be the signature f_1 : (P_1 , N_1), ..., f_k : (P_k , N_k)

an element t of the free algebra F_Sigma(X) is a well-founded tree whose nodes are the operations f_i and the leaves are elements of type X. we think of t as a piece of inert source code of type X with unhandled operations f_1, ..., f_k

in order to give t a computational interpretation we need to explain how the operations f_i are interpreted. we might write something like this:

``````
handle:
  t
with:
  operation f_1 (p , k) : t_1 (p , k)
  operation f_2 (p , k) : t_2 (p , k)
  ...
  return (x) : r (x)
``````

this means that in t the operation f_i(p,k) is interpreted as t_i(p,k), and that return(x) is interpreted as r(x)

the above handler is precisely the unique homomorphism h : F_Sigma(X) -> A into the sigma-algebra A whose operations are t_1, ..., t_k, and such that h(return(x)) = r(x)

a handler may handle only some operations and leave others alone

let Sigma = (f,g) be a signature with two operations f and g, and we do not bother to write down their types

let t ∈ F_(f,g)(X) be a term of type X with unhandled operations f and g

the handler

``````
handle:
  t
with:
  operation f (p , k) : u (p , k)
  return (x) : r (x)
``````

only handles f and leaves g alone

how should we interpret this? well, we could say that both operations are handled, except that g is handled by itself:

``````
handle:
  t
with:
  operation f (p , k) : u (p , k)
  operation g (p , k) : g (p , k)
  return (x) : r (x)
``````

now it is clear that the handler corresponds to the unique homomorphism h : F_(f,g)(X) → F_(g)(Y) such that h(f(p,k)) = u(p, h ∘ k), h(g(p,k)) = g(p, h ∘ k), and h(return(x)) = r(x)
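a Python sketch of this "handle f, forward g" homomorphism, with trees encoded as hypothetical `Return`/`Op` nodes (illustrative names): f-nodes are interpreted by `u`, g-nodes are rebuilt with handled subtrees:

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Return:
    x: Any

@dataclass
class Op:
    f: str
    p: Any
    k: Callable[[Any], Any]

def handle_f(t, u, r):
    """h : F_{f,g}(X) -> F_{g}(Y): interpret f via u,
    re-emit g, and map return leaves through r"""
    if isinstance(t, Return):
        return r(t.x)
    if t.f == "f":
        return u(t.p, lambda n: handle_f(t.k(n), u, r))
    # g is handled by itself: rebuild the node over handled subtrees
    return Op(t.f, t.p, lambda n: handle_f(t.k(n), u, r))

# example: interpret f as "just continue"; g passes through unchanged
u = lambda p, k: k(())
t = Op("f", None, lambda _: Op("g", 1, lambda _: Return(5)))
out = handle_f(t, u, r=lambda x: Return(x))
assert isinstance(out, Op) and out.f == "g"   # g survived unhandled
assert out.k(()).x == 5
```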

### generic effects and sequencing

we have seen that an operation accepts a parameter and a continuation. we cannot expect programmers to write down explicit continuations all the time, so we switch to an equivalent but friendlier syntax known as generic effects and sequencing (or "do notation" in Haskell)

an operation f (p , \x . c), applied to parameter p and continuation \x . c is written with generic effects and sequencing as

( x ← f_gen (p) ) c

read this as: "first perform generic effect f_gen applied to parameter p, let x be the return value, and then do c"

if f is an operation of type (P,N)
then f_gen is a map f_gen : P → F_Sigma(N)
defined by f_gen (p) = f (p , \x . return(x))
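with free-algebra elements encoded as hypothetical `Return`/`Op` trees (an illustrative Python sketch, not Eff's internals), the generic effect is just the operation applied to the trivial continuation:

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Return:
    x: Any

@dataclass
class Op:
    f: str
    p: Any
    k: Callable[[Any], Any]

def gen(f):
    """turn an operation name f : (P, N) into f_gen : P -> F_Sigma(N)"""
    return lambda p: Op(f, p, lambda x: Return(x))  # f(p, \x. return(x))

decide = gen("decide")
t = decide(())                 # the tree decide((), \b. return(b))
assert t.f == "decide"
assert t.k(True).x is True and t.k(False).x is False
```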

sequencing is a special kind of handler

if t ∈ F_Sigma(X) and u : X → F_Sigma(Y)
then sequencing ( x ← t ) u (x)
denotes the element h(t) ∈ F_Sigma(Y)
where h : F_Sigma(X) → F_Sigma(Y) is the unique homomorphism satisfying h(return(x)) = u(x) and,
for every operation f_i ∈ Sigma, h(f_i(p,k)) = f_i(p, h ∘ k)

what these equations amount to is that sequencing passes the operations through and binds return values
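here is a Python sketch of sequencing as that handler (same kind of illustrative `Return`/`Op` tree encoding as the other sketches): operation nodes pass through untouched, and return leaves are replaced by the trees that u produces:

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Return:
    x: Any

@dataclass
class Op:
    f: str
    p: Any
    k: Callable[[Any], Any]

def seq(t, u):
    """( x <- t ) u(x): the homomorphism h : F_Sigma(X) -> F_Sigma(Y)
    with h(return(x)) = u(x) and h(f(p, k)) = f(p, h . k)"""
    if isinstance(t, Return):
        return u(t.x)                                 # bind the return value
    return Op(t.f, t.p, lambda n: seq(t.k(n), u))     # pass the operation through

# ( x <- decide_gen(()) ) return (x, not x)
t = Op("decide", (), lambda b: Return(b))
out = seq(t, lambda x: Return((x, not x)))
assert out.f == "decide"                  # the operation passed through
assert out.k(True).x == (True, False)     # the return value got bound
```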

a minor complication ensues when we try to write down handlers for generic effects. a handler needs access to the continuation, but the generic effect does not carry it around explicitly. Eff uses the `yield` keyword for access to the continuation

### introducing Eff

to compile Eff you need OCaml 3.11 or newer, ocamlbuild, and Menhir (both of which are likely to be bundled with OCaml)

put the source in a suitable directory and compile it with `make` to create the OCaml bytecode executable `eff.byte`. when you run it you get an interactive shell without line-editing capabilities. if you never make any typos that should be fine; otherwise use one of the line-editing wrappers, such as rlwrap or ledit

#### syntax

Eff has syntax with mandatory indentation. TABs are not allowed in indentation, only SPACEs

we have basic integer arithmetic with integers of unbounded size, booleans, strings, together with the basic operations:

```
#> (1379610 + 9) * 80618151420468743021 ;;
111222333444555666777888999
#> 1 = 2 ;;
False
#> if 1 < 2
then  "one is less" ^ " than two"
else  "you must be kidding" ;;
"one is less than two"
```

we have tuples, lists and records, all of which can be decomposed with pattern matching:

```
#> let (_, a, b) = (3, 4, 5) ;;
#> (a, b, a + b) ;;
(4, 5, 9)
#> let x = 1 :: [2; 3; 4; 5] @ [6; 7; 8] ;;
[1; 2; 3; 4; 5; 6; 7; 8]
#> let (a, x, _) = ("banana", 4, ["some", "stuff"]) ;;
#> (a, x) ;;
("banana",  4)
#> type foo = { a:int ; b:string } ;;
#> let y = {a = 5 ; b = "bar"} ;;
```

we have lambda-abstraction :

```
#> (function x -> (x, x + 1)) 5 ;;
(5, 6)
```

for more examples, look at the file `prelude.eff`, which is loaded into eff before anything else happens

to create a new reference instance `x` with initial value 5:

```
#> let x = ref 5 ;;
#> let g = function z ->
     let a = z#lookup ()
     in  z#update (a + 3) ;
     z#update (z#lookup () + z#lookup ()) ;
     z#lookup () ;;
val g : int ref ⟶  int = <fun>
#> g x ;;
- : int = 16
```

Eff has a built-in effect `io` with operations `std#print` and `std#read`, which print to standard output and read from standard input

### how to define effects

we can define our own effects with the `effect` statement:

``````
type e = effect
  operation op1 x : <signature>
  operation op2 x : <signature>
  ...
end
``````

the above code defines an effect `e` with operations op1, op2, ... the `return` clause tells us how to handle (pure) values. the `finally` clause tells us what should be done with the value returned by the `with` statement that uses the effect `e`. in other words, it defines a wrapper which tells us how to "run" the effect as well as how to "get out" of it (compare to Haskell's runState for the state monad)

if you leave out the `return` or `finally` clauses it is assumed that they are identity functions

#### user-defined references

a program which uses a reference of type S and returns a value of type T is in fact a map S -> T

so

```
# let myref = ref 5;;
val myref : int ref = <instance #6>
# myref;;
- : int ref = <instance #6>
# myref#lookup;;
- : unit ⟶  int = <fun>
# myref#lookup ();;
- : int = 5
# myref#update;;
- : int ⟶  unit = <fun>
# myref#update 67;;
- : unit = ()
# myref#lookup ();;
- : int = 67
#
```

we can create any number of local references. we can store them in a list, and they will work correctly as long as they do not escape the scope of their declaration

#### handlers

when we define an effect we tell how its operations are handled by default

#### exceptions

Eff does not have built-in exceptions

## practice

taken from

```let x = 10 + 10
let y = x * 3
let double x = 2 * x

(* functions can be recursive *)
let rec fact n =
  if n = 0 then 1 else n * fact (n - 1)

(* or even mutually recursive *)
let rec even n = match n with
  | 0 -> true
  | x -> odd (x - 1)
and odd n = match n with
  | 0 -> false
  | x -> even (x - 1)

(* you can declare your own types *)
type shape =
| Circle of float
| Rectangle of float * float

(* you can use pattern matching to define functions on such types *)
let area shape =
  match shape with
  | Circle r -> 3.14159 *. r *. r
  | Rectangle (w, h) -> w *. h

(* you can write the above using the "function" shortcut *)
let perimeter = function
  | Circle r -> 2.0 *. 3.14159 *. r
  | Rectangle (w, h) -> 2.0 *. (w +. h) ;;

(* Operations in Eff are actually composed of two parts:
an instance
an operation symbol

for example, we print out messages by calling the operation   "std#print".
here, "print" is the operation symbol that signifies the action,
and "std" stands for the standard channel

where Eff really differs from OCaml is that you can handle such calls

for example, the program inside the handler would print "A", "B", "C" and "D",
but the handler overrides it just as an exception handler would override an exception
thus, the program should output just: "I see you tried to print A. Not so fast!"
*)

handle
  std#print "A" ;
  std#print "B" ;
  std#print "C" ;
  std#print "D"
with
| std#print msg k ->
    std#print ("I see you tried to print " ^ msg ^ ". Not so fast!\n") ;;

(* the second parameter "k" in the handling clause stands for the continuation,
i.e. the part of the program that is waiting for the result of print

the difference between exception handlers and general effect handlers is
that we may call this continuation

for example, instead of stopping after the first print like above,
we may handle it and then resume the continuation by passing it "()",
the unit result expected from the call of "std#print"
*)

handle
  std#print "A" ;
  std#print "B" ;
  std#print "C" ;
  std#print "D"
with
| std#print msg k ->
    std#print ("I see you tried to print " ^ msg ^ ". Okay, you may.\n") ;
    k () ;;

(* we may create a reference to count the number of std#print calls,
and print out only the first three

note that references are again accessed in the standard form instance#operation
*)

let count = ref 0 ;;

handle
  std#print "A" ;
  std#print "B" ;
  std#print "C" ;
  std#print "D"
with
| std#print m k ->
    match count#lookup () with
    | 0 ->
        std#print ("I see you tried to print " ^ m ^ ". Sure!\n") ;
        count#update 1 ;
        k ()
    | 1 ->
        std#print ("I see you tried to print " ^ m ^ ". Slow down!\n") ;
        count#update 2 ;
        k ()
    | _ ->
        std#print ("I see you tried to print " ^ m ^ ". That's it!\n") ;;

(* another example *)

let collect = handler
  | val x -> (x, "")
  | std#print m k -> let (r , ms) = k () in (r , m ^ ms) ;;

with collect handle
  std#print "A" ;
  std#print "B" ;
  std#print "C" ;
  std#print "D" ;;

val collect : α =>  α × string = <handler>
- : unit × string = ((), "ABCD")

type choice = effect
  operation fail : unit -> empty
  operation decide : unit -> bool
end

let c = new choice

let fail () = match c#fail () with
let choose_left = handler c#decide () k -> k true
let choose_max = handler c#decide () k -> max (k true) (k false)
let choose_all = handler
  | val x -> [x]
  | c#fail () _ -> []
  | c#decide () k -> (k true) @ (k false) ;;

(*  example 1
try also "choose_max" and "choose_all"        *)

with choose_left handle
  let x = (if c#decide () then 1 else 2) in
  let y = (if c#decide () then 0 else 5) in
  x - y ;;

(*  then ... else ...     are continuations here  *)

val c : choice = <instance #6>
val fail : unit ⟶  α = <fun>
val choose_left : α ⟹  α = <handler>
val choose_max : α ⟹  α = <handler>
val choose_all : α ⟹  α list = <handler>
- : int = 1

(*   example 2  *)

let rec choose_int m n =
  if m > n
  then fail ()
  else if c#decide ()
  then m
  else choose_int (m + 1) n

let sqrt m =
  let rec try n =
    let n2 = n ** 2 in
    if n2 > m
    then None
    else if n2 = m then Some n else try (n + 1)
  in try 0

let pythagorean m n =
  let a = choose_int m (n - 1) in
  let b = choose_int a n in
  match sqrt (a ** 2 + b ** 2) with
  | None -> fail ()
  | Some c -> (a, b, c)

let backtrack = handler
  | c#decide () k ->
      handle k false with c#fail () _ -> k true ;;

with choose_all handle pythagorean 5 7 ;;
with backtrack handle pythagorean 5 15 ;;
with choose_all handle pythagorean 3 4 ;;
with choose_all handle pythagorean 5 15 ;;

val c : choice = <instance #6>
val fail : unit ⟶  α = <fun>
val choose_left : α ⟹  α = <handler>
val choose_max : α ⟹  α = <handler>
val choose_all : α ⟹  α list = <handler>
val choose_int : int ⟶  α ⟶  int = <fun>
val sqrt : α ⟶  int option = <fun>
val pythagorean : int ⟶  int ⟶  int × int × int = <fun>
val backtrack : α ⟹  α = <handler>
- : (int × int × int) list = []
- : int × int × int = (9, 12, 15)
- : (int × int × int) list = [(3, 4, 5)]
- : (int × int × int) list = [(5, 12, 13); (6, 8, 10); (8, 15, 17);
(9, 12, 15)]

```