Optimizing boolean expressions for speed.

Minimizing boolean expressions is of great pragmatic importance. In hardware, it reduces the number of transistors in microprocessors. In software, it speeds up computations. If you have a complex expression you want to minimize and you look up a textbook on discrete mathematics, you will usually find a list of boolean laws and identities that you are supposed to use to minimize functions.

Textbooks usually list the following boolean laws and identities:

  • The basic operators: and \wedge, or \vee, exclusive or \oplus, negation \neg, implication \rightarrow, true t, false f, etc.
  • Double negation: \neg\neg a=a
  • Identity laws: a \vee f=a, a \wedge t=a
  • Idempotent laws: a \vee a = a, a \wedge a=a
  • Dominance laws: a \vee t=t, a \wedge f=f
  • Commutative laws: a \vee b = b \vee a, a \wedge b=b \wedge a
  • Associative laws: a\vee(b\vee c)=(a \vee b)\vee c, a \wedge(b\wedge c)=(a \wedge b)\wedge c
  • Distributive laws: a\wedge(b \vee c)=(a \wedge b)\vee(a\wedge c), a\vee(b \wedge c)=(a\vee b)\wedge(a\vee c)
  • De Morgan’s laws: \neg(a \vee b)=\neg a \wedge \neg b, \neg(a \wedge b)=\neg a \vee \neg b.
  • Common sub-expression elimination. You can rewrite x=(c \wedge(a \vee b)) \oplus (d \wedge(a \vee b)) as t=a \vee b, x=(c \wedge t)\oplus(d \wedge t) (a minimal sketch follows this list)
  • And other identities you will discover.
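
For instance, the sub-expression elimination above translates directly into code; a minimal C++ sketch (the variable names are invented for illustration):

#include <iostream>

int main()
 {
  bool a = true, b = false, c = true, d = false;

  // without elimination: (a || b) is evaluated twice
  bool x1 = (c && (a || b)) ^ (d && (a || b));

  // with elimination: the common sub-expression is evaluated once
  bool t  = a || b;
  bool x2 = (c && t) ^ (d && t);

  std::cout << x1 << " " << x2 << std::endl; // same value twice
  return 0;
 }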

Using these laws, you are supposed to massage your expression until you discover a shorter one that computes the same truth values as the original expression. Or you can use a truth table and see what expressions are equivalent to some of the sub-expressions and how you can combine them to obtain an equivalent but shorter boolean expression. All you need is a pen, a piece of paper, coffee, and a lot of patience. While I make this sound like a tedious process, you will rapidly get the knack of it and you will find that minimizing expressions of a few variables is quite easy.

Let us take an example. You may happen to write something like (I do, once in a while):

if ( current->next==0
    || (current->next!=0 && SomeThing()) )
   {
    ...
   }
else ...

The conditional statement is equivalent to \neg a \vee (a \wedge b), where a is current->next!=0, and b is SomeThing(). Using the truth table approach (a favorite of textbooks), you write:

a  b  |  \neg a  |  a \wedge b  |  \neg a \vee (a \wedge b)  |  \neg a \vee b
-----------------------------------------------------------------------------
0  0  |     1    |       0      |              1             |        1
0  1  |     1    |       0      |              1             |        1
1  0  |     0    |       0      |              0             |        0
1  1  |     0    |       1      |              1             |        1

You eventually find the last column by trying numerous combinations (OK, in this case it is not all that difficult, but you get the point), discovering that \neg a \vee (a \wedge b) \equiv \neg a \vee b. So you rewrite the above code as:

if ( current->next==0 || SomeThing() )
   {
    ...
   }
else ...
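
If you distrust the algebra, the equivalence is also easy to verify exhaustively; a minimal C++ check over all four truth assignments:

#include <cassert>

int main()
 {
  // verify \neg a \vee (a \wedge b) \equiv \neg a \vee b on every row
  for (int a = 0; a <= 1; a++)
   for (int b = 0; b <= 1; b++)
    assert( (!a || (a && b)) == (!a || b) );
  return 0;
 }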

The textbook approach to minimization, however, doesn’t tell the whole story: it assumes that all operators, \vee, \wedge, \neg, \oplus, etc., have an equal cost, namely zero. In real life, however, evaluating a boolean expression does incur a time cost. First, each variable involved in the expression must be assigned a truth value, which may be a constant but may also be the result of a computationally expensive function. Second, the expression itself must be evaluated, and the cost of that evaluation is not zero: it is roughly proportional to the number of operators involved.

Using boolean simplification and common sub-expression elimination will help a lot to reduce the computational cost of the evaluation by not evaluating costly sub-expressions more than once, but we can go further. Let us note that it is not always necessary to evaluate the entire expression to ascertain its truth value. For example, the expression a \vee (b \wedge c) is true if a is true, regardless of the value of (b \wedge c), so it is safe to abort the evaluation of the expression whenever a is true. If a is false, then we proceed to the evaluation of (b \wedge c). Similarly, the expression a \wedge (b \vee c) is false whenever a is false, regardless of the value of (b \vee c). The sub-expression (b \vee c) is evaluated if, and only if, a is true.

Some (most?) programming languages make use of this observation to speed up the evaluation of conditional expressions. In Simula, for example, short-cut evaluation is explicitly controlled by the programmer via the logical operators and then and or else. In Turbo Pascal, compiler switches, {$b+} and {$b-}, instruct the compiler to generate code for complete or truncated expression evaluation. C and C++ use short-cut evaluation in all cases.
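
This is easy to observe in C++; a minimal sketch (the predicates are made up, and print a message when they are actually evaluated):

#include <iostream>

bool a_val() { std::cout << "a evaluated\n"; return true;  }
bool b_val() { std::cout << "b evaluated\n"; return true;  }
bool c_val() { std::cout << "c evaluated\n"; return false; }

int main()
 {
  // a \vee (b \wedge c): since a is true, b and c are never called
  if ( a_val() || (b_val() && c_val()) )
   std::cout << "expression is true\n";
  return 0;
 }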

All C/C++ programmers use short-cut boolean expression evaluation to conditionally enable parts of expressions. Indeed, it is not uncommon to see code that looks like

if (p && p->count)
  {
   // do stuff with p because
   // p is not null
  }
else
  {
   // do something else
  }

The first part of the expression prevents the second part, p->count, from being evaluated if p is null. However, I think not all programmers explicitly use this feature to speed up tests by carefully resequencing/reorganizing their conditional expressions. Consider the following piece of code:

if ( really_slow_test(with,plenty,of,parameters)
     && slower_test(with,some,parameters)
     && fast_test // with no parameters
   )
 {
  ...then code...
 }

This code first evaluates an expensive function and then, on success, proceeds to evaluate the remainder of the expression. But even if the first test fails and the evaluation is short-cut, there’s a significant performance penalty, because the costly really_slow_test(...) has already been evaluated. While retaining program correctness, one can rearrange the expression as follows:

if ( fast_test
     && slower_test(with,some,parameters)
     && really_slow_test(with,plenty,of,parameters) )
 {
  ...then code...
 }

Now the code is much faster, because fast_test costs very little compared to the other two parts of the expression, and we proceed to evaluate really_slow_test(...) only if the two previous tests succeed. Ideally, the expressions should lend themselves to such reordering. In practice, there might be dependencies in the tests that force a natural ordering, regardless of which test is expensive or not. Think of our previous example with p and p->count; clearly, inverting the expression is not going to work. In many other cases, however, one can massage the expression to simplify it and to move the easiest, fastest, most likely to fail (for &&) or to succeed (for ||) tests to the front, allowing the expression’s value to be ascertained as fast as possible while still being correct.
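
As a concrete illustration, suppose we filter records where a cheap flag test rejects most candidates before an expensive scan (the types and predicates here are hypothetical):

#include <cstring>

struct record
 {
  int flags;
  const char *payload;
 };

// the bit test costs a cycle or two; the substring
// search scans the whole payload
bool is_active(const record & r) { return (r.flags & 1) != 0; }
bool matches(const record & r)   { return std::strstr(r.payload, "needle") != 0; }

bool keep(const record & r)
 {
  // cheap, likely-to-fail test first: most records are rejected
  // before the expensive scan is ever attempted
  return is_active(r) && matches(r);
 }

int main()
 {
  record r = { 0, "haystack" };
  return keep(r) ? 1 : 0; // inactive record: matches() never runs
 }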

The use of fail-fast or succeed-fast testing is, I think, a good, simple practice that does not endanger code legibility and does not lean too far toward premature optimization (and all its evils), yet yields interesting performance gains whenever the conditional expressions aren’t completely trivial.

6 Responses to Optimizing boolean expressions for speed.

  1. Stephen says:

    Another thing to consider is branching cost which easily runs into 10s of cycles.

    On the assembly level, if ( a && b && c ) { block } translates to

    if ( a ) {
      if ( b ) {
        if ( c ) {
          block
        }
      }
    }
    

    If your tests are simple, the penalties are proportionately larger, e.g.

    int a, b;
    if ( a == 0 && b == 0 )  // two branches

    can be better written as

    if ( (a | b) == 0 )      // one branch
    
  2. Steven Pigeon says:

    You’re right. Branchings ARE expensive. However, if-cascades aren’t a frequent programming style, and they might not help legibility all that much.

    The second trick you present is valid when all variables are already assigned a truth value, or have unit cost of initialization. && translates to &, || to |, and using De Morgan’s laws as you did allows us to transform ((a==0)&&(b==0))==1 into ((a!=0)||(b!=0))==0, which is, indeed, (a|b)==0. I was going to talk about this in Part II, thanks for the spoilers ;)

    If you do not have exactly 0 or 1 as your variables’ values, you can still demote them to zero anyway. Suppose you have ((a==3)&&(b==c))==1; you can rewrite this as (((a-3)==0)&&((b-c)==0))==1, which, applying De Morgan’s law again, you transform into ((a-3)|(b-c))==0. All low-cost arithmetic, no extra jump due to short-cut evaluation.
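
    A compilable version of that last transformation might look like this (a minimal sketch; the function names are mine, and note that the subtraction form can overflow on extreme signed values, which the XOR form sidesteps):

    #include <cassert>

    // branch-free test for (a == 3) && (b == c): subtraction form from above
    // (beware of signed overflow on extreme values)
    bool test_sub(int a, int b, int c)
     {
      return ((a - 3) | (b - c)) == 0;
     }

    // same idea with XOR, which avoids the overflow concern
    bool test_xor(int a, int b, int c)
     {
      return ((a ^ 3) | (b ^ c)) == 0;
     }

    int main()
     {
      assert(  test_sub(3, 5, 5) &&  test_xor(3, 5, 5) );
      assert( !test_sub(4, 5, 5) && !test_xor(4, 5, 5) );
      return 0;
     }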

  3. mohan says:

    Can you please provide alternate boolean minimization techniques other than the K-map and tabular methods?

  4. Steven Pigeon says:

    There are other techniques, but they are ultimately all very complex (in the computational complexity sense). It is an NP-complete problem; see The Complexity of Boolean Functions by Ingo Wegener.

    A few videos of courses on the subject:

    There are others linked in the “suggestions.”

  5. ticktock says:

    Let me describe a “brute force” technique for boolean optimization I came up with. The concept is to measure term evaluation speed and probability at runtime and adjust the order of evaluation dynamically. I’ll apply it to your example:

    if ( fast_test
         && slower_test(with,some,parameters)
         && really_slow_test(with,plenty,of,parameters) )
     {
      ...then code...
     }
    

    Here is the rewrite:

    // global: exp is an instance of a hypothetical self-tuning class
    OptimizeExpression *exp = new OptimizeANDExpression();

    // condition evaluator
    while ( !exp->Evaluated() )
    {
      switch ( exp->Term() )   // index of the next term to evaluate
      {
      case 0: exp->TermVal( fast_test );
        break;
      case 1: exp->TermVal( slower_test(bla, bla) );
        break;
      case 2: exp->TermVal( really_slow_test(bla, bla, bla) );
        break;
      default: exp->Evaluated( true );
      }
      exp->NextTerm();
    }

    if ( exp->isTrue() )
    {
      // .. then code ..
    }


    The cool stuff happens inside OptimizeExpression. The first iteration of the while loop constructs all the terms and captures the initial term timings: the start/end times of each term are recorded and the terms sorted by some appropriate ranking, i.e. time/probability with a running average. Subsequent iterations of the while loop adjust the order of evaluation through the case statement assigned to each term, so that the fastest terms with the highest probability of being false (for AND expressions) or true (for OR expressions) execute first. Considerations for using this are:

    1) you need to evaluate ALL terms in the expression at least once to gather the statistics of unknown/changing execution times and probabilities;
    2) the expression has to be something you evaluate a lot, i.e. looping through it hundreds of times with different data;
    3) the “while switch” construct should replace any “if” without scoping issues;
    4) the term evaluation needs to be expensive enough to compensate for the overhead of this mechanism (although it has little impact that I can see).
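
    Here is a minimal, self-contained C++ sketch of the idea (the class and member names are my own invention, and it ranks terms by measured cost only, omitting the probability/running-average bookkeeping described above):

    #include <algorithm>
    #include <chrono>
    #include <functional>
    #include <vector>

    // self-reordering AND expression: terms are timed on the first
    // evaluation, then evaluated cheapest-first with short-cutting
    class OptimizeANDExpression
    {
      struct term { std::function<bool()> eval; double cost; };
      std::vector<term> terms;
      bool calibrated = false;

    public:
      void add(std::function<bool()> f) { terms.push_back({ std::move(f), 0.0 }); }

      bool evaluate()
      {
        if (!calibrated)
        {
          // first pass: evaluate ALL terms to measure their costs
          bool result = true;
          for (auto &t : terms)
          {
            auto start = std::chrono::steady_clock::now();
            bool v = t.eval();
            t.cost = std::chrono::duration<double>(
                       std::chrono::steady_clock::now() - start).count();
            result = result && v;
          }
          std::sort(terms.begin(), terms.end(),
                    [](const term &x, const term &y) { return x.cost < y.cost; });
          calibrated = true;
          return result;
        }
        // subsequent passes: cheapest terms first, stop at the first false
        for (auto &t : terms)
          if (!t.eval())
            return false;
        return true;
      }
    };

    One would register each term once, e.g. exp.add([&]{ return fast_test; });, and call exp.evaluate() in place of the original if condition.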

    • Steven Pigeon says:

      I would think it would be most useful when you gather the run-time behavior, then use the collected data to hard-code the final result: like a profile-guided optimization. With gcc you can instrument the code to gather this data, and the compiler is able to use it to generate better code.

      Did you actually look at the code generated / measure the overhead?
