
Functional Programming is not What Makes Haskell Great

by Harpo Roeder, November 30th, 2020

Too Long; Didn't Read

Haskell frequently appears on Hacker News or /r/programming, but its practicality does not come from strongly typed functional programming; it comes from the power of the language's runtime. Any thread can be killed from the outside with an exception, without modifying the thread or function itself. Green threads multiplexed over asynchronous IO, the same model that lets services such as NGINX attain high performance, are the primary mode of computation, and Haskell has a full range of tooling to support debugging and building complex applications.

This post is directed substantially at non-Haskellers. Haskell frequently appears on Hacker News or /r/programming, but the content is commonly evangelizing some aspect of functional programming, strong types, or purity.

Haskell embodies all of those things, but its practicality does not come from strongly typed functional programming; it comes from the power of the runtime. Other strongly typed functional languages such as OCaml exist, but in many respects they are not nearly as mature. Here I explore some of the runtime properties that greatly contribute to Haskell's power, performance, and convenience.

Numerous Haskell implementations exist (FLRC, UHC, JHC); however, we will specifically go over GHC, which is by far the most used and the most powerful.

Asynchronous Exceptions

In almost all languages, exceptions are thrown by the currently executing thread. In C++, throwing and catching an exception looks something like this:

#include <iostream>
#include <stdexcept>

void doAction() {
    if (1 == 0) {
        throw std::runtime_error("the impossible happened");
    }
}

int main() {
    try {
        doAction();
    } catch (const std::exception& err) {
        std::cout << err.what() << std::endl;
    }
}

In Haskell *any* thread can throw an exception to another thread. Let's spawn a thread and then immediately kill it by throwing an exception from outside of it. No modification of the thread or function itself is required.

threadId <- forkIO myLongRunningAction
throwTo threadId MyException
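
As a runnable sketch of the same idea (the looping worker below is purely illustrative), killThread is simply throwTo specialized to the ThreadKilled exception:

import Control.Concurrent (forkIO, killThread, threadDelay)
import Control.Monad (forever)

main :: IO ()
main = do
  -- The worker loops forever and never checks any flag or channel.
  tid <- forkIO (forever (putStrLn "working..." >> threadDelay 100000))
  threadDelay 500000
  -- killThread is throwTo with the ThreadKilled exception.
  killThread tid
  putStrLn "worker killed from the outside"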

This capability ends up being incredibly powerful. All of a sudden, features that might normally be implemented as fundamental properties of the language can now be expressed within it.

An example of this is timeout :: Int -> IO a -> IO (Maybe a). It takes a timeout in microseconds and an action to run, and returns a result only if the action completes in time. Internally it spawns a thread with a timer and, should the deadline be hit, throws an exception to the other thread's computation.
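
A minimal sketch of how it is used (the one- and two-second durations are arbitrary):

import Control.Concurrent (threadDelay)
import System.Timeout (timeout)

main :: IO ()
main = do
  -- Allow one second (1,000,000 microseconds) for a two-second action.
  result <- timeout 1000000 (threadDelay 2000000 >> pure "done")
  case result of
    Just value -> putStrLn ("finished: " ++ value)
    Nothing    -> putStrLn "timed out"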

In a language like Go, a goroutine cannot be killed externally. A common pattern emerges where authors manually wait on channels so that the goroutine can be shut down:

stop := make(chan bool)

go func() {
    for {
        select {
        case <- stop:
            return
        }
    }
}()

stop <- true

Should a mistake be made, resources will leak, and more complicated control flows must be continually duplicated. In Haskell, even killing a thread that owns resources such as sockets is safe, because constructs built on bracket automatically clean up resources.
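
For illustration, a minimal bracket sketch (the file name is made up); the release action runs whether the body returns normally, throws, or is interrupted by an asynchronous exception:

import Control.Exception (bracket)
import System.IO (IOMode (WriteMode), hClose, hPutStrLn, openFile)

main :: IO ()
main =
  bracket
    (openFile "example.log" WriteMode)  -- acquire the resource
    hClose                              -- release it, even if the thread is killed
    (\h -> hPutStrLn h "doing some work")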

Automatic detection of failure cases

Fearless concurrency is hard, and language features that make it harder to screw up are always in the works. While Haskell will not stop you from deadlocking your program, the runtime can detect when this happens and throw an exception such as BlockedIndefinitelyOnMVar in the deadlocked code.
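
A minimal sketch of that detection, assuming a single-threaded program blocking on an MVar that nothing will ever fill:

import Control.Concurrent (MVar, newEmptyMVar, takeMVar)
import Control.Exception (BlockedIndefinitelyOnMVar (..), catch)

main :: IO ()
main = do
  m <- newEmptyMVar :: IO (MVar ())
  -- No other thread references this MVar, so the take can never succeed;
  -- the runtime notices and throws BlockedIndefinitelyOnMVar here.
  takeMVar m `catch` \BlockedIndefinitelyOnMVar ->
    putStrLn "runtime detected the deadlock"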

Haskell is lazy, so unproductive infinite loops are less common; when one is detected, a NonTermination exception may be thrown. This detection is, however, not perfect.

Haskell has tail call optimization, limiting the required stack size; however, a recoverable StackOverflow exception can still be thrown. Various arithmetic exceptions also exist for numeric errors, such as Overflow or DivideByZero.
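
For example, integer division by zero throws a catchable ArithException (a sketch; evaluate is used to force the lazy division inside IO):

import Control.Exception (ArithException (DivideByZero), catch, evaluate)

main :: IO ()
main =
  (evaluate (1 `div` 0 :: Int) >>= print)
    `catch` \DivideByZero -> putStrLn "caught DivideByZero"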

Each thread can have an independently set allocation limit via setAllocationCounter, which causes an AllocationLimitExceeded exception when exceeded. This can be useful for handling multitenancy.
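
A sketch of per-thread budgeting (the one-megabyte limit and the allocation-heavy computation are arbitrary choices for illustration):

import Control.Concurrent (forkIO, threadDelay)
import Control.Exception (AllocationLimitExceeded (..), handle)
import GHC.Conc (enableAllocationLimit, setAllocationCounter)

main :: IO ()
main = do
  _ <- forkIO $
    handle (\AllocationLimitExceeded -> putStrLn "tenant exceeded its budget") $ do
      setAllocationCounter (1024 * 1024)  -- roughly 1 MB of allocation
      enableAllocationLimit               -- only this thread is limited
      print (sum [1 .. 1000000 :: Integer])
  threadDelay 1000000  -- keep main alive long enough to observe the result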

Green threads and asynchronous networking

Most languages have some implementation of green threads, but few have them as the primary mode of computation. The success of Go and Erlang for writing networked applications is heavily tied to this model of concurrency. Languages such as Rust, which implement the functionality as a library, face substantial ergonomic consequences.

Standard POSIX threads (pthreads) consume substantially more resources than green threads and rely on the operating system's scheduler. On my system:

> ulimit -a | grep stack
stack size (kbytes, -s) 8192

the default thread stack size is 8 megabytes. Technically, Linux lazily allocates this via virtual memory; however, even *spawning and killing* an OS thread can consume thousands of CPU cycles.
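
By contrast, GHC's green threads start with a stack on the order of a kilobyte and grow it on demand, so spawning very large numbers of them is routine. A small sketch (100,000 is an arbitrary count):

import Control.Concurrent (forkIO, newEmptyMVar, putMVar, takeMVar)
import Control.Monad (replicateM_)

main :: IO ()
main = do
  let n = 100000 :: Int
  done <- newEmptyMVar
  -- Each forkIO creates a lightweight thread scheduled by the runtime.
  replicateM_ n (forkIO (putMVar done ()))
  replicateM_ n (takeMVar done)
  putStrLn "all green threads finished"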

Scaling to a large number of concurrent users (generally known as the C10K problem) is usually accomplished via asynchronous IO. Asynchronous IO multiplexing combined with a low degree of parallelism is how services such as NGINX attain such high performance.

GHC internally utilizes epoll or kqueue, depending on the platform, and support for the newer io_uring Linux API is already being experimented with, which could bring substantial performance gains.

Of green thread implementations, Haskell's is among the most powerful. For example, with threadStatus :: ThreadId -> IO ThreadStatus you can inspect the status of a thread and, if it is blocked, even see *why*:


data BlockReason
  = BlockedOnMVar
        -- ^blocked on 'MVar'
  | BlockedOnBlackHole
        -- ^blocked on a computation in progress by another thread
  | BlockedOnException
        -- ^blocked in 'throwTo'
  | BlockedOnSTM
        -- ^blocked in 'retry' in an STM transaction
  | BlockedOnForeignCall
        -- ^currently in a foreign call
  | BlockedOnOther
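
A minimal sketch of inspecting a blocked thread (the short delay simply gives the scheduler a moment before we look):

import Control.Concurrent (forkIO, newEmptyMVar, takeMVar, threadDelay)
import GHC.Conc (threadStatus)

main :: IO ()
main = do
  lock <- newEmptyMVar
  tid  <- forkIO (takeMVar lock)  -- this thread blocks on the empty MVar
  threadDelay 10000
  status <- threadStatus tid
  print status                    -- expect: ThreadBlocked BlockedOnMVar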

Performance tuning and GC

Some languages, such as Java, are famous for their tuning capabilities. A garbage collector cannot be perfect for every workload; there are throughput and latency tradeoffs, among others.

By default, the runtime uses a generational copying collector. GHC has a wide variety of flags to flip and knobs to turn. One of particular interest is --numa, which enables optimizations for high core count, multi-CPU servers that have higher cross-node communication overhead.
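
Beyond flags, a program can also inspect its own collector at runtime; a sketch using GHC.Stats (field names as in recent GHC versions, and the program must be run with +RTS -T):

import GHC.Stats (RTSStats (..), getRTSStats, getRTSStatsEnabled)

main :: IO ()
main = do
  enabled <- getRTSStatsEnabled
  if not enabled
    then putStrLn "re-run with +RTS -T to enable runtime statistics"
    else do
      stats <- getRTSStats
      putStrLn ("collections:     " ++ show (gcs stats))
      putStrLn ("allocated bytes: " ++ show (allocated_bytes stats))
      putStrLn ("max live bytes:  " ++ show (max_live_bytes stats))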

Haskell has a full range of tooling to support debugging and building complex applications:

  • Code coverage reporting via HPC (Haskell Program Coverage).
  • Time profiling flamegraphs via ghc-prof-flamegraph.
  • Space profiling graphs via hp2ps.

Also published on: https://harporoeder.com/posts/haskell-runtime/