Functional Programming Concepts

Brevity – Shorter Time to Market

  • Functional programs are typically on the order of 2 to 10 times shorter than equivalent imperative implementations of the same logic. This means greater productivity and shorter time to market.
  • How brief exactly is this brevity?
  • We will see a good example of this in the functional implementation of the visitor pattern.



  • Immutability – the quality of being unable to change [Wikipedia]. Functional programming favours immutability.
  • With the ultimate purpose of every program being to effect some kind of change, the essence of this principle is better stated as side-effect freedom. Effectively, the aim is to keep the boilerplate constructs of a program free of state while explicitly managing intended state changes that are part of the business logic.
  • Transient states tend to be kept on the stack – read recursion instead of iteration. Also, instead of iterating over a container or collection to apply a transformation, collections tend to provide mapping and folding primitives that apply a transformation function to each element and combine elements in a systematic way.
  • Some functional languages introduce special constructs to ring-fence state changes – see Haskell and monads. For C++ programmers, this concept represents “const correctness” on steroids. For Java programmers, it means everything is final!
  • Purely functional languages enforce immutability across the entire application!
  •  Recursion tends to be formulated through optimized tail calls. This means that instead of traversing the stack in reverse to collect and assemble results,  the final recursive call (tail call) marks the exit from the recursion. This allows discarding the parent stack frame at each level of recursion, thus avoiding growing and ultimately “blowing” the stack. Consider the simple C++ example below:
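The original C++ listing is not reproduced here; a minimal sketch in the same spirit (factorial with an accumulator, chosen for brevity) might look like this:

```cpp
#include <cstdint>

// Tail-recursive factorial: the recursive call is the last action
// performed, so the parent stack frame can be discarded at each level
// (given tail-call optimisation) and the stack does not grow.
std::uint64_t factorial(std::uint64_t n, std::uint64_t acc = 1) {
    if (n <= 1) return acc;            // base case marks the exit from the recursion
    return factorial(n - 1, acc * n);  // tail call: nothing left to do in this frame
}
// factorial(10) == 3628800
```

Note that C++ compilers are permitted, but not required by the standard, to optimise such tail calls; functional languages typically guarantee it.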


  • Functional languages optimize tail recursion so as not to grow the stack and not to repeatedly incur function-call overhead. For example, Scala on the JVM optimizes tail-recursive bytecode from method-invoke statements to jump statements.
  • Collection or container classes in functional languages provide a host of mapping and folding mechanisms so that client code does not have to iterate over data structures. See Scala examples below. For more complex parsing of data structures, a mechanism called pattern matching is used.
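For instance (an illustrative sketch, not the original listing):

```scala
val xs = List(1, 2, 3, 4, 5)

// map: apply a transformation to each element, no explicit iteration
val doubled = xs.map(_ * 2)        // List(2, 4, 6, 8, 10)

// fold: combine elements in a systematic way
val sum = xs.foldLeft(0)(_ + _)    // 15
```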


Consider the example below – an “imperative style” foo. It may be described thusly:

  1. Obtain a memory location
  2. Successively write the result of each operation to this location
  3. Dereference the location
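The original listing is not reproduced here; a C++ sketch matching the steps above (with arithmetic chosen, as an assumption, so that the result is 10) might be:

```cpp
// 1. Obtain a memory location.
// 2. Successively write the result of each operation to it.
// 3. Dereference the location.
int foo() {
    int x = 0;    // the memory location
    x = 1 + 2;    // overwrite it with each intermediate result
    x = x + 3;
    x = x + 4;
    return x;     // dereference: yields 10
}
```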


The result of course is 10.


A functional foo… in the OCaml REPL (Read Eval Print Loop).
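The original transcript is not preserved; an illustrative session in the same spirit (values chosen, as an assumption, so that the result is again 10) might look like this:

```ocaml
# let a = 1 + 2;;   (* each intended change is a new named value *)
val a : int = 3
# let b = a + 3;;
val b : int = 6
# let c = b + 4;;
val c : int = 10
```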

  • Written in terms of the emergent properties (read: intended changes), not in terms of the types they are stored in, nor in terms of the locations they are stored at; boilerplate code is entirely absent from user input


  • All intended changes declared!
  • No room for unintended changes!
  • The solution is said to be referentially transparent!
  • A direct consequence is tractable correctness: test n, n+1 and edge cases (here overflow) and be assured of correct code!
  • A functional compiler can reason about the semantics of the code, not merely the syntax.
  • As a consequence, wide-ranging optimisations can be applied, including theoretically parallelising sequential paths of execution.

A Functional  Birthday Cake Recipe:

The emergent behaviour of this solution is transparent!


  • Let  batter = 400g flour + 2 eggs + 200ml milk
  • Let dough =  knead batter for 10 minutes
  • Let cake = bake dough for 30 minutes at 180°
  • Let iced_cake = cake + sugar coating
  • Let birthday_cake = iced_cake + candles

Imperative Birthday Cake Recipe:

Emergent behaviour of the 2nd solution is opaque!


  • BakingForm container
  • Container.Add (400g flour)
  • Container.Add(2 eggs)
  • Container.Add(200 ml milk)
  • Container.Knead(10 minutes)
  • Container.Bake(30 minutes, 180°)
  • Container.AddCoatingToContents(sugar)
  • Container.Add(Candles)

Questions:

  • When does the dough emerge?
  • When does the cake emerge?
  • Does a cake emerge?
  • With unintended state changes allowable, can another chef (read: another thread of execution) interfere with the container state during our operation and produce side effects? Everything is written in terms of the baking form.

Dupont make baking forms. They would like this recipe. Nobody else would find it natural! Yet, this is how we program today. Our logic is driven by the container: CPU & registers yielded Macro Assembler. Assembler yielded C. C yielded C++ & Java.


Side-effect freedom effectively eliminates the shared-memory locking problems that plague parallel programs devised in imperative languages. Recently, this has been the key driver propelling interest in functional languages as we transition from traditional von Neumann architectures to massively parallel multicore architectures. Read-only data structures require no locks.


Read Only – Lock Free


The concurrency story doesn’t stop here. If your application is inherently single threaded and sequential the compiler can still optimize functional programs to run on multiple CPUs. Take a look at the following code fragment:

String s1 = somewhatLongOperation1();
String s2 = somewhatLongOperation2();
String s3 = concatenate(s1, s2);

In a functional language the compiler could analyse the code, classify the functions that create strings s1 and s2 as potentially time-consuming operations, and run them concurrently. This is impossible to do in an imperative language because each function may modify state outside its scope and the function following it may depend on that state. In functional languages, automatic analysis of functions to find good candidates for concurrent execution is as trivial as automatic inlining!

  • Fine, but there are a lot of allocations happening!
  • Most functional languages are garbage collected!
  • What is the cost of this high-level approach?
  • How does this scale?

Black Scholes GPU & Haskell

University of New South Wales, Sydney Australia GPU Research:


  • Haskell performance tightly hugs the C CUDA curve.
  • For 20 million call options, GPU Haskell “comes in” at about 10 ms on a 960-core GPU, compared to >10,000 ms for a single CPU.
  1. CUDA -> a subset of C for the GPU architecture
  2. Marginally higher level than Macro Assembler!
  3. Highly optimized; requires code to be written to the hardware!

Lazy Evaluation

C++ programmers among us will recognize the concept of lazy evaluation.
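The original snippet is not reproduced here; the familiar case is short-circuit evaluation of `&&` (an illustrative sketch):

```cpp
#include <iostream>

bool a_check() { return false; }
bool b_check() { std::cout << "b evaluated\n"; return true; }

// && short-circuits: because a_check() returns false, b_check() is
// never called and "b evaluated" is never printed.
bool demo() {
    if (a_check() && b_check())
        return true;
    return false;
}
```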


In the above example, b will not be evaluated if a is false. This is one of the few cases where C++ employs lazy evaluation. Functional languages tend to employ this kind of optimization across the entirety of an application, because the compiler can ‘reason’ about the code semantics. Haskell, for example, is lazy by default. Nothing is evaluated until necessary (and then only once for identical inputs, if the developer employs memoization).

Consider the following Haskell example:

qsort []     = []
qsort (x:xs) = qsort (filter (< x) xs) ++ [x] ++ qsort (filter (>= x) xs)

minimum' ls = head (qsort ls)

This first sorts the list and then takes its first element. However, because of lazy evaluation, only the head actually gets computed. For example, if we take the minimum of the list [2, 1, 3], qsort will first filter out all the elements that are smaller than two. Then it runs qsort on that (returning the singleton list [1]), which is already enough. Because of lazy evaluation, the rest is never sorted, saving the remaining computation time.

The real benefit of laziness, however, is not merely that it makes things more efficient, but that it allows us to write simple, elegant code that would not be practical in an eager evaluation environment. This brings us to the topic of infinite data structures.

An abstract example – food for thought…

Consider a tic-tac-toe program. This has four functions:

  • A next-move function that takes a current board and generates a list of new boards each with one move applied
  • Then there is a “move tree” function which applies the next-move function to derive all the possible board positions that could follow from the current board arrangement.
  • There is a minimax function that walks the tree (or possibly only part of it) to find the best next move (minimizing cost in terms of moves while maximizing gain)
  • There is a board-evaluation function that determines if one of the players has won.

This creates a nice clear separation of concerns. In particular the next-move function and the board evaluation functions are the only ones that need to understand the rules of the game: the move tree and minimax functions are completely reusable.

Now let's try implementing chess instead of tic-tac-toe. In an “eager” (i.e. conventional) language this won't work because the move tree won't fit in memory. So now the board-evaluation and move-generation functions need to be mixed in with the move-tree and minimax logic, because the minimax logic has to be used to decide which moves to generate. Our nice clean modular structure disappears.

However in a lazy language the elements of the move tree are only generated in response to demands from the minimax function: the entire move tree does not need to be generated before we let minimax loose on the top element. So our clean modular structure still works in a real game.
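The separation of concerns described above can be sketched in Haskell (all names here are illustrative, not from the original):

```haskell
-- A game tree: a board position and all positions reachable from it.
data Tree board = Node board [Tree board]

-- Build the (conceptually infinite) tree of all continuations.
-- Laziness means branches are only materialised when demanded.
moveTree :: (board -> [board]) -> board -> Tree board
moveTree nextMoves b = Node b (map (moveTree nextMoves) (nextMoves b))

-- Minimax walks only the part of the tree it actually inspects;
-- unvisited branches are never generated.
minimax :: (board -> Int) -> Int -> Tree board -> Int
minimax eval 0     (Node b _)  = eval b
minimax eval _     (Node b []) = eval b
minimax eval depth (Node _ ts) =
  negate (minimum (map (minimax eval (depth - 1)) ts))
```

Only `nextMoves` and `eval` know the rules of the game; `moveTree` and `minimax` are completely reusable.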

Next, a simpler but more concrete example…

In C++ (naïve solution):
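The original C++ listing is not reproduced here; a sketch of a hand-rolled “lazy” Fibonacci generator, the imperative approximation of an infinite stream, might look like this:

```cpp
#include <cstdint>

// A "lazy" Fibonacci generator: internal state is mutated on every
// call to next(), producing one element of the sequence at a time.
class FibGenerator {
    std::uint64_t a = 0, b = 1;
public:
    std::uint64_t next() {
        std::uint64_t current = a;
        std::uint64_t sum = a + b;
        a = b;
        b = sum;
        return current;  // 0, 1, 1, 2, 3, 5, 8, ...
    }
};
```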


In Scala:
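The original Scala listing is not preserved; a lazy-stream version consistent with the program output below might be sketched as:

```scala
// An infinite, lazily evaluated Fibonacci stream: each element is
// computed only on demand, as the sum of the two preceding elements.
lazy val fibs: Stream[BigInt] =
  BigInt(0) #:: BigInt(1) #:: fibs.zip(fibs.tail).map { case (a, b) => a + b }

// Force only the values below 150; printing a Stream terminates the
// listing with "empty" once the taken prefix is exhausted.
fibs.takeWhile(_ < 150).print()
```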


Program Output:

0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, empty()
Process finished with exit code 0


Dynamic programming is an optimization method for problems that exhibit the properties of overlapping subproblems which are only slightly smaller[1] and optimal substructure. Examples include combinatorial optimization (e.g. the knapsack problem – maximizing resource allocation under financial constraints), combinatorics, complexity theory and applied mathematics.

Example: the Fibonacci sequence (naïve implementation). Notice that if we call, say, fib(5), we produce a call tree that calls the function on the same value many different times:

  1. fib(5)
  2. fib(4) + fib(3)
  3. (fib(3) + fib(2)) + (fib(2) + fib(1))
  4. ((fib(2) + fib(1)) + (fib(1) + fib(0))) + ((fib(1) + fib(0)) + fib(1))
  5. (((fib(1) + fib(0)) + fib(1)) + (fib(1) + fib(0))) + ((fib(1) + fib(0)) + fib(1))

In particular, fib(2) was calculated three times from scratch.
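The naïve implementation that produces this call tree might be sketched as follows (a Scala sketch; the original listing is not preserved):

```scala
// Naïve, exponential-time Fibonacci: fib(n - 1) and fib(n - 2)
// recompute the same subproblems over and over.
def fib(n: Int): Int =
  if (n < 2) n
  else fib(n - 1) + fib(n - 2)
```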

In larger examples, many more values of fib, or sub problems, are recalculated, leading to an exponential time algorithm. Dynamic programming is a discipline to reformulate such problems in a bottom up fashion to avoid such recalculation.


How can memoization help? Given referential transparency in functional programming, there is no need to calculate a given value twice for the same parameters or, more generally, the same lexical scope (closures).
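The memoized listing is not preserved; a sketch of a bottom-up version memoized at the top level through a weak hash map (Scala, names illustrative) might be:

```scala
import java.util.WeakHashMap

// Cache keyed on the argument. The weak map may drop entries as
// memory tightens, so caching results cannot exhaust the heap.
val cache = new WeakHashMap[Integer, BigInt]()

def fib(n: Int): BigInt = {
  val cached = cache.get(n)
  if (cached != null) cached
  else {
    // Bottom-up: build the sequence iteratively, so no subproblem
    // (e.g. fib(2)) is ever recomputed.
    var a = BigInt(0); var b = BigInt(1)
    for (_ <- 0 until n) { val next = a + b; a = b; b = next }
    cache.put(n, a)
    a
  }
}
```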


The above example adds memoization to our Fibonacci calculator. Note that our version utilizes a bottom-up approach to calculating the sequence (the prominent dynamic programming approach). Thus it does not suffer the problem of repeatedly recalculating fib(2).

Our example constrains itself to adding memoization to the top level function.

Note the use of the weak hash map. This map ‘leaks’ as memory tightens; therefore caching function return values will not lead to out-of-memory errors.

The key point to take away is that reformulating a naïve solution into a bottom-up dynamic-programming approach isn't strictly necessary when memoization is available and coupled with referential transparency. This brings tremendous optimization benefits in combinatorics, complexity theory and algorithmic problems.

Pattern Matching & Classic Expression Problem

Pattern matching is functional programming's multiple-dispatch construct, used to mimic polymorphism in OO. It gives rise to DSLs, raising abstraction to levels above objects and functions (abstract syntax trees, algebraic data types, production rules). It is the logical equivalent of the visitor pattern in OO.

The visitor pattern is a common mechanism to address a shortcoming shared by object-oriented paradigms (e.g. C++, Java) and procedural paradigms (top-down decomposition, e.g. C) in flexibly extending the ‘data-type × operation’ space, which is referred to as the Expression Problem.

Visitor Pattern to Pattern Matching

Source: Wikipedia


Composite Pattern


The magic: dispatch of the visitor to all elements in the composite.


The magic: overloaded functions add a second level of dispatch. This gives the visitor pattern the name “double dispatch.”

The outcome: adding arbitrary methods to objects without physically amending each class with the relevant method.
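The original Java listings are not reproduced here; a consolidated sketch, adapted from the well-known Car/CarElement visitor example and consistent with the program output below, might be:

```java
// Element hierarchy: each element accepts a visitor (first dispatch).
interface CarElement {
    void accept(CarElementVisitor visitor);
}

// Visitor: overloaded visit methods provide the second dispatch.
interface CarElementVisitor {
    void visit(Wheel wheel);
    void visit(Body body);
    void visit(Engine engine);
    void visit(Car car);
}

class Wheel implements CarElement {
    private final String name;
    Wheel(String name) { this.name = name; }
    String getName() { return name; }
    public void accept(CarElementVisitor visitor) { visitor.visit(this); }
}

class Body implements CarElement {
    public void accept(CarElementVisitor visitor) { visitor.visit(this); }
}

class Engine implements CarElement {
    public void accept(CarElementVisitor visitor) { visitor.visit(this); }
}

// Composite: dispatches the visitor to all contained elements.
class Car implements CarElement {
    private final CarElement[] elements = {
        new Wheel("front left"), new Wheel("front right"),
        new Wheel("back left"), new Wheel("back right"),
        new Body(), new Engine()
    };
    public void accept(CarElementVisitor visitor) {
        for (CarElement element : elements) element.accept(visitor);
        visitor.visit(this);
    }
}

// A concrete visitor: one operation added without amending the classes.
class CarElementDoVisitor implements CarElementVisitor {
    public void visit(Wheel wheel)   { System.out.println("Kicking my " + wheel.getName() + " wheel"); }
    public void visit(Body body)     { System.out.println("Moving my body"); }
    public void visit(Engine engine) { System.out.println("Starting my engine"); }
    public void visit(Car car)       { System.out.println("Starting my car"); }
}

public class Main {
    public static void main(String[] args) {
        new Car().accept(new CarElementDoVisitor());
    }
}
```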

Program Output:

Kicking my front left wheel
Kicking my front right wheel
Kicking my back left wheel
Kicking my back right wheel
Moving my body
Starting my engine
Starting my car

Scala Pattern Matching
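The original Scala listing is not preserved; a sketch consistent with the program output below (names illustrative) might be:

```scala
// Algebraic data type for the car parts; `sealed` lets the compiler
// check the match below for exhaustiveness.
sealed trait CarElement
case class Wheel(name: String) extends CarElement
case object Body extends CarElement
case object Engine extends CarElement

val car: List[CarElement] =
  Body :: Engine ::
  List("front left", "front right", "back left", "back right").map(Wheel)

// One pattern match replaces the entire visitor hierarchy.
def doVisit(element: CarElement): Unit = element match {
  case Wheel(name) => println(s"Kicking my $name wheel")
  case Body        => println("Moving my body")
  case Engine      => println("Starting my engine")
}

car.foreach(doVisit)
```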


Program Output:

Moving my body
Starting my engine
Kicking my front left wheel
Kicking my front right wheel
Kicking my back left wheel
Kicking my back right wheel

Correctness: if I forget the Wheel clause, I get the following compilation error:


  • 30% code size compared to the Java solution
  • Parser for an Abstract Syntax Tree? … DSL
  • No mutable data
  • Exhaustiveness check
  • Bespoke constructs… towards DSLs
  • We can easily roll our own arbitrary control constructs…

Simple Unicode DSLs


Or define mathematical expressions using their Unicode representation…
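Scala permits Unicode symbols as identifiers, so mathematical notation can be defined directly; an illustrative sketch (not the original listing):

```scala
// Unicode operator identifiers: √ and ∑ are legal Scala method names.
def √(x: Double): Double = math.sqrt(x)
def ∑(xs: Double*): Double = xs.sum

val hypotenuse = √(∑(3.0 * 3.0, 4.0 * 4.0))   // 5.0
```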



A Mini “SQL” DSL
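The original listing is not preserved; a tiny “SQL-like” DSL built from infix method calls might be sketched as follows (all names are illustrative):

```scala
// A query carries a table name and a row predicate.
case class Query(table: String,
                 predicate: Map[String, String] => Boolean = _ => true) {
  def where(p: Map[String, String] => Boolean): Query = copy(predicate = p)
  def run(rows: List[Map[String, String]]): List[Map[String, String]] =
    rows.filter(predicate)
}

object select {
  def from(table: String): Query = Query(table)
}

val users = List(
  Map("name" -> "ada", "role" -> "admin"),
  Map("name" -> "bob", "role" -> "user")
)

// Reads almost like SQL thanks to infix notation.
val admins = (select from "users") where (_("role") == "admin") run users
```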



Lunar Lander DSL – BASIC


“All up”  approx. 300 lines of code, including examples & comments…

