An Erlangling’s Blog

September 20, 2007

Introducing Plists: An Erlang module for doing list operations in parallel

Filed under: plists — wingedsubmariner @ 4:37 pm

plists is a drop-in replacement for the Erlang lists module that makes most list operations parallel. It can operate on each element in parallel (useful for IO-bound operations), on sublists in parallel (to take advantage of multi-core machines with CPU-bound operations), or across Erlang nodes (to parallelize work inside a cluster). It handles errors and node failures, and it can be configured, tuned, and tweaked for optimal performance while minimizing overhead.

Almost all of the functions are identical to their equivalents in lists and return exactly the same result. Each comes in two forms: one with identical syntax that operates on each element in parallel, and one that takes an optional “malt”, a specification for how to parallelize the operation.

fold is the one exception: parallel fold is different from linear fold. The module also includes a simple mapreduce implementation and the function runmany. All the other functions are implemented in terms of runmany, which is a generalization of parallel list operations.
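The reason parallel fold differs is that the list is split into pieces, each piece is folded independently, and the intermediate accumulators are then fused together, so the fold must tolerate being split. Here is a minimal, self-contained sketch of that idea in plain Erlang. This is an illustration only, not plists’ actual implementation; the module and function names are made up, and plists additionally takes a malt and handles errors:

```erlang
-module(pfold_sketch).
-export([demo/0]).

%% Fold two halves of a list in separate processes, then fuse
%% the two accumulators. Fun must give a correct result on any
%% sublist, and Fuse must combine two accumulators.
demo() ->
    List = lists:seq(1, 100),
    {Left, Right} = lists:split(50, List),
    Fun = fun(X, Acc) -> X + Acc end,
    Fuse = fun(A, B) -> A + B end,
    Self = self(),
    Pids = [spawn(fun() -> Self ! {self(), lists:foldl(Fun, 0, Part)} end)
            || Part <- [Left, Right]],
    [A, B] = [receive {Pid, Acc} -> Acc end || Pid <- Pids],
    Fuse(A, B).
```

Summing works here because addition is associative; a fold whose result depends on strict left-to-right order cannot be parallelized this way.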

plists makes it easy to parallelize list operations. Suppose you have a list of Erlang nodes and want to know which are actually up:

NodeList = [apple@laptop, orange@desktop],
UpNodes = lists:filter(fun (Node) ->
              % net_adm:ping returns pong if it can connect
              % to a node, pang otherwise
              net_adm:ping(Node) =:= pong
          end, NodeList),

This works. It goes over every element in the node list and removes those that don’t respond. However, it checks only one node at a time, and even a node that is up can take a while to respond. If a node is down, you sit waiting for a timeout, and you wait through that timeout for every single down node in turn.

UpNodes2 = plists:filter(fun (Node) ->
               % net_adm:ping returns pong if it can connect
               % to a node, pang otherwise
               net_adm:ping(Node) =:= pong
           end, NodeList),

This is almost exactly the same, with two differences: we store the result in UpNodes2, and we use plists:filter instead of lists:filter. plists:filter spawns a process for every node we want to check, so the whole call takes no longer than the slowest response, or the timeout if a node is down.

This is the easiest way to use plists: simply replace a call to a function in lists with a call to the corresponding function in plists, and plists operates on each element in parallel.
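The per-element mode works roughly like this minimal sketch in plain Erlang. This is an illustration of the idea, not plists’ actual code; plists layers error handling, malts, and node distribution on top of it:

```erlang
-module(pmap_sketch).
-export([pmap/2]).

%% Spawn one process per element, then gather the results
%% back in the original order by matching on each worker's pid.
pmap(Fun, List) ->
    Self = self(),
    Pids = [spawn(fun() -> Self ! {self(), Fun(X)} end) || X <- List],
    [receive {Pid, Result} -> Result end || Pid <- Pids].
```

Note that this sketch has no error handling: if one worker crashes, the receive blocks forever. That is exactly the gap plists fills by monitoring its worker processes.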

For more advanced use, invoke the plists function with an optional extra parameter, the malt. The malt describes how to parallelize the list operation. Suppose we have a huge list of numbers and need to find the square root and the factors of each one, and suppose we have a dual-core machine we want to use to full effect:

HugeList = lists:seq(1, 100000),
SquareRoots = plists:map(fun math:sqrt/1, HugeList, {processes, 2}),
Factors = plists:map(fun find_factors/1, HugeList, 10),

The last parameter of each plists:map call is the malt. For computing square roots we used the malt {processes, 2}. This malt splits the huge list into two sublists, each processed by its own process, allowing both cores to work on square roots at once.

That would not have been a good way to find the factors, however, because the time it takes to factor a number depends on its size. Bigger numbers take longer to factor, and if one process gets the small numbers and the other the large ones, the process with the small numbers finishes first, leaving the other to keep working on a single core. Our list is sorted, so splitting it in half would produce exactly that situation. Instead we use the malt 10, which splits the huge list into 10-element sublists, each processed by its own process.

If it turns out that factoring uses a large amount of memory, so that we don’t want to process every sublist at once, we could use the malt [10, {processes, 2}]: the huge list is still split into 10-element sublists, but plists is careful to run only two processes at a time. If we wanted to compute with an Erlang cluster we could use the malt [10, {nodes, [{apple@laptop, 2}, {orange@desktop, 3}]}]. This runs five processes at once, two on apple and three on orange.
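To recap, the malt forms used in the discussion above, written out as Erlang terms:

```erlang
%% Split into 2 sublists, one process per sublist (one per core):
Malt1 = {processes, 2},
%% Split into 10-element sublists, one process per sublist:
Malt2 = 10,
%% 10-element sublists, but at most 2 processes running at once:
Malt3 = [10, {processes, 2}],
%% 10-element sublists spread across a cluster, two processes
%% on apple and three on orange, five running at once in total:
Malt4 = [10, {nodes, [{apple@laptop, 2}, {orange@desktop, 3}]}]
```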

Just because we started many parallel processes doesn’t mean we have to wait for them all to finish. If any error occurs, plists terminates all its processes early and returns, a feature we can [ab]use to find the node in our NodeList with the smallest ping time:

NodeList = [apple@laptop, orange@desktop],
try plists:foreach(fun (Node) ->
        % net_adm:ping returns pong if it can connect
        % to a node, pang otherwise
        case net_adm:ping(Node) of
            pong ->
                exit({closest, Node});
            pang ->
                ok
        end
    end, NodeList) of
    ok ->
        no_nodes_up
catch exit:{closest, Closest} ->
    Closest
    % Closest is now bound to the closest node.
end,

This pings all nodes in parallel, but as soon as one responds it terminates and returns with exit. We catch the exit and recover the closest node.

This introduction has covered only the most basic features plists offers. Complete documentation and the project page, with downloads and the source repository, are available online.

plists shows off not only Erlang’s powerful concurrency, but also how ready functional programming is for concurrent programming. Because operations are already abstracted, it is easy to change how they operate without changing their meaning. Changing Erlang code to work with plists is very easy, and optimizing malts is a small extra step. Writing plists itself was easy. Parallelizing even a single loop in C, by contrast, would cause enormous code expansion, and there would still be no mechanism for optimizing all loops at once. Higher-order functions encourage reuse of the code devoted solely to control flow, an absolutely necessary feature for building real concurrent software.



  1. […] yet, and it’s far too late in the evening to start now, so for the full low-down have a read here and then move on to the docs here. On top of all the nice looking functionality, you’ll soon […]

    Pingback by Ciaran’s Random Writings » Blog Archive » Parallel List Operations in Erlang — September 20, 2007 @ 6:09 pm

  2. This looks great – looking forward to giving it a test drive.

    Comment by ciarang — September 20, 2007 @ 6:11 pm

  3. This is brilliant. Really clean code too. Thank you!

    Comment by thomas lackner — September 21, 2007 @ 12:50 pm

  4. “We catch the exit and recover the closest node.”

    The closest node that returns pong before we catch the exit, no?

    “This pings all nodes in parallel, but as soon as one responds it terminates and returns with exit.”

    But not necessarily at the same time, right? I mean, what happens if the list of nodes to ping is really huge? We might catch the exit before having tested all the other nodes, and one of them might have been closer than the one we retrieve, right? That seems particularly likely if the ping timeout is shorter than the time it takes to ping all the nodes in the list. Am I wrong?

    Comment by Daniel — September 21, 2007 @ 10:41 pm

  5. Daniel:

    Yes, the first node to respond with a pong will trigger the exit, and will be the node returned.

    Yes, we don’t start pinging all the nodes at exactly the same time, and for a huge list this delay could be significant. However, in an environment where you care about distances to nodes, the nodes are probably quite distant, and the ping times will be considerably greater than the time it takes to spawn all the pinging processes, so we only lose a little resolution.

    If you really care about being exact, instead use plists:map to map each Node to {Node, Time_to_ping}, and then find the element with the smallest Time_to_ping.

    Comment by wingedsubmariner — September 22, 2007 @ 12:20 am

  6. very interesting, but I don’t agree with you

    Comment by Idetrorce — December 15, 2007 @ 11:24 am

  7. I would like to see a continuation of the topic

    Comment by Maximus — December 20, 2007 @ 6:09 am

  8. […] using erldis as a component in parallel/distributed information retrieval (in conjunction with plists), and for accessing data shared with python / django apps. It’s a fully compliant erlang […]

    Pingback by erldis – an Erlang Redis Client « — January 5, 2010 @ 12:48 pm
