An introduction to functional programming

by Chip Camden
Contributing Writer, [GAS]

Way back in the dark ages when I first started programming in BASIC, the first stumbling block I encountered on my quest for programmerhood was the statement:

X = X + 1

My high school algebra training screamed out, “That’s an unsolvable equation!”  I soon learned that the above is not an equation at all, but rather a command.  It’s not a statement of fact; it’s an instruction to alter a fact.  This command begins with an implied “LET” (in some BASIC versions, “LET” is even required), and the correct interpretation of the command relies not only on the previous value of X, but also on the fact that the right side is evaluated before the assignment occurs.

I may have had good reason to be uneasy about statements like this one, which change the state of a program when executed.  They’re the bread and butter of imperative programming, but there is an alternative:  functional programming, or FP for short.  Functional programming seeks to eliminate the need for program state by expressing all computations in the form of functions that take arguments and return values, without introducing side effects.  Those functions read much more like functions in mathematics, where “X = X + 1” could never work.  In fact, functional programming is based on the lambda calculus, which is a mathematical system for investigating functions.

Thinking functionally, “X + 1” is really syntactic sugar for an “add” function that takes two arguments, X and 1.  The reassignment of the result to X, however, creates a situation where program state is dependent on how many times we’ve executed this piece of code.  In truly functional programming, that reassignment would not be allowed.  Instead, the result could be assigned to a new variable, and the scope of that assignment would be clearly delineated.  For instance, in Haskell:

let y = x + 1
in dosomethingwith y

The value of x + 1 (which is shorthand for “add x 1”) is assigned to a variable y that is defined for the statement “dosomethingwith y” (assuming here that the function “dosomethingwith” is defined elsewhere).  Note that outside the “in” clause, “y” is undefined.  If you need the value of y outside this statement, then it must be returned.

What do these restrictions buy you?  A little thing called referential transparency, for one.  For any given set of arguments, a function will always return the same value.  The function isn’t modifying some persistent state that could make it behave differently from one call to the next.  The advantage is that you can employ the function in any context and rely on its results.  This not only makes the code easier to understand, but it also facilitates moving things around without breaking them.
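The contrast is easy to sketch in JavaScript (the function names here are illustrative, not from any library):

```javascript
// Referentially transparent: the result depends only on the arguments.
function add(x, y) {
  return x + y;
}

// Not referentially transparent: the result also depends on hidden state.
let total = 0;
function addToTotal(x) {
  total = total + x;   // side effect: mutates persistent state
  return total;
}

add(2, 3);        // always 5, no matter when or how often it's called
addToTotal(2);    // 2 the first time...
addToTotal(2);    // ...but 4 the second time
```

Any call to add can be replaced by its result without changing the program’s behavior; no call to addToTotal can.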

As you might expect, avoiding program state changes makes looping difficult.  Functional languages tend to use recursion instead.  For instance, to count from 1 to 10 in Lisp:

(defun count (start end)
  (cond ((< start end) (cons start (count (+ 1 start) end)))
        (t (list start))))

(count 1 10)

The function “count” defines counting from start to end as a list composed of start followed by counting from start+1 to end, unless start is not less than end, in which case it’s just a list containing start.  The function recurses until start reaches end, then builds the list as the calls unwind, each level consing its own start onto the front of the result.  (We could have included a case for start greater than end, but we’re keeping the example simple here; as it stands, that case would simply return a list containing only start.)  In case you’re not used to Lisp, let’s translate that logic into JavaScript:

function count(start, end) {
  if (start < end)
    return start + ' ' + count(start + 1, end);
  return start.toString();
}

alert(count(1,10));

Using this approach instead of a “for” loop means never having to worry about someone reusing the variable “i”.  On the other hand, in languages that don’t optimize recursion well, this technique can quickly degrade performance (or overflow the stack) when overused.
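One common mitigation is to rewrite the recursion in tail form, carrying the partial result along in an accumulator so that implementations which optimize tail calls can run it in constant stack space.  A sketch in JavaScript (note that most JavaScript engines still don’t perform tail-call elimination, so this mainly illustrates the shape):

```javascript
// Tail-recursive variant of count: the recursive call is the very last
// thing the function does, with the partial result carried in "acc".
function count(start, end, acc = '') {
  const sofar = acc === '' ? start.toString() : acc + ' ' + start;
  if (start < end)
    return count(start + 1, end, sofar);  // tail call: nothing left to do afterward
  return sofar;
}

count(1, 5);  // "1 2 3 4 5"
```

Because no work remains after the recursive call returns, an implementation that recognizes tail calls can reuse the current stack frame instead of pushing a new one.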

Another feature of functional languages is the ability to pass functions as arguments to other functions, and to call those function arguments only when their results are needed (a form of deferred, or “lazy,” evaluation).  In many languages, whenever you encounter a function reference, the function is evaluated and the result is returned in its place before anything else happens.  But in a language that supports higher-order functions, the function is not evaluated until the function to which it is being passed decides to do so.  This is most useful for passing callbacks to handle special cases and events.  In many languages, you have the option of doing either one.  For instance, in JavaScript:

function doit() { alert("ya!"); return null;}

function test(arg) {
  alert("in test!");
  if (arg != null)
    arg();
}

test(doit());    // Eager evaluation
test(doit);     // Lazy evaluation

The first call to test executes doit beforehand, while the second call doesn’t execute doit until the third line of the test function.  Thus, the order of alerts in the code above is “ya!”, “in test!”, then “in test!”, “ya!”.  The presence or absence of parentheses on the function argument determines whether the function is invoked immediately or simply passed as a reference.
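When the deferred function needs arguments of its own, the usual trick is to wrap the call in a parameterless function (a “thunk”) that captures them; the receiving function then forces evaluation only if and when it needs the value.  A hypothetical sketch (expensiveLookup and maybeLookup are made-up names for illustration):

```javascript
// expensiveLookup stands in for any costly computation we'd rather avoid.
let callCount = 0;
function expensiveLookup(key) {
  callCount = callCount + 1;           // track how many times we actually run
  return 'value-for-' + key;
}

// maybeLookup takes a thunk and only forces it when needed.
function maybeLookup(needed, thunk) {
  if (needed)
    return thunk();                    // evaluation happens here, on demand
  return 'skipped';
}

maybeLookup(false, () => expensiveLookup('a'));  // "skipped"; expensiveLookup never runs
maybeLookup(true, () => expensiveLookup('b'));   // "value-for-b"; runs exactly once
```

The arrow function captures its argument but delays the call, which is exactly the eager-versus-deferred distinction from the test/doit example, generalized to functions that take parameters.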

Lisp, first specified in 1958, was the first language based on the lambda calculus.  It boggles the mind to realize how much more advanced Lisp was (and still is) than its contemporaries, especially back in the days of Fortran and COBOL.  Programming languages ever since have learned from Lisp, especially as functional languages have become more popular.

But Lisp is not a purely functional language.  It supports multiple programming paradigms, and it allows you to create side effects, like modifying program variables during the execution of a function.  A couple of strongly functional languages are gaining in popularity: Haskell (which is purely functional) and Erlang (functional at its core, though not strictly pure).  Most other languages that support functional programming are actually multi-paradigmatic, providing imperative and object-oriented features as well.  These include (but are by no means limited to) Scheme and Clojure (more functional Lisp dialects), OCaml, F# (pretty much OCaml for .NET), Ruby, Python, JavaScript, and Perl.  Even C# and Java have added some functional support, notably lambdas.

It’s also possible to use a functional style in some languages that don’t specifically provide support for FP.  For instance, even in C you can compose an application entirely from functions, avoid creating side effects within them, and pass functions as arguments (using function pointers) to be evaluated only on demand.

There are many other features generally associated with functional programming that we don’t have space to cover here — for instance, lexical closures and continuations.  While not strictly required to qualify a language as supporting FP, they certainly make the functional programming paradigm more powerful.
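Just to give a taste of the first of those, here is a minimal lexical closure in JavaScript: the returned function “closes over” the local variable n, so each counter carries its own private, captured state (makeCounter is an illustrative name, not a library function):

```javascript
// makeCounter returns a function that closes over the local variable n.
function makeCounter() {
  let n = 0;
  return function () {
    n = n + 1;
    return n;
  };
}

const c1 = makeCounter();
const c2 = makeCounter();
c1();  // 1
c1();  // 2
c2();  // 1  (independent of c1's captured state)
```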

How does functional programming relate to object-oriented programming?  OOP concentrates on modeling the actors in an application, and using methods to describe the functions that each one can perform.  FP starts instead from the overall process, and includes objects only as needed.  Sometimes one paradigm works better, sometimes the other.  For instance, objects are useful when describing complex systems with many parts, like an ERP application.  Mathematical computations and document composition, on the other hand, are processes that often really don’t need any discussion of the actors involved.  The reason so many so-called functional languages actually support both paradigms is that both are useful in different situations, and often in combination.

This post is part 5 in a series on the history of programming languages.  Here are links to parts 1 through 4:

  1. The Early History of Programming Languages
  2. An introduction to object oriented languages
  3. The ascent of scripting languages
  4. Scripting Languages and the Web

(edited for code indentation)