Debouncing using Binary Finite Impulse Response Filter

In digital systems, we expect input signals to be noise free, but that is not always realistic. For example, let us think about an embedded device with a series of push buttons. The user interacts with the device by pressing the buttons, and we would expect, quite naively, that the micro-controller receives either one or zero depending on whether the button is pressed or not.

However, when the user presses a button, there is a short time during which the micro-controller cannot tell for sure whether the button is pressed or not. During this short time, the mechanical switch under the button establishes the contact and (electronic) noise is read. To the micro-controller, this noise appears as a short burst of random ones and zeroes between the button-not-pressed state (or zero) and the button-is-quite-pressed state (or one). The micro-controller has to decide through that burst of random bits when—and if—the button is pressed. The same thing occurs when the button is released.

The noise from the contact is usually somewhat smoothed by a small capacitor-based circuit called a debouncer. The debouncer makes sure that the signal rises smoothly from 0 (not pressed) to 1 (pressed). But even with a debouncing circuit, the signal must go through a phase where its level isn’t quite zero nor quite one, and the micro-controller’s I/O port may read either level, resulting in multiple, rapid contacts instead of a single, longer, contiguous contact. Naturally, this situation must be avoided; if not by hardware, then by software.

The contact via a mechanical switch will establish itself in a very noisy way. It will eventually settle to be fully on, but there will be a short time (well, rather a long time from the point of view of the processor) where it will be between on and off. The noisy contact level will go through the debouncer circuit, which will remove most of the noise but will not threshold the signal just yet. The thresholding—transforming the signal to either zero or one—will be done by the micro-controller’s I/O pin circuitry to which the output of the debouncer is connected. The general situation is illustrated:

We would like to obtain the ideal logic levels, going to one exactly when the contact is established and going back to zero instantly when it is released. However, we do not have access to this ideal signal; we can only guess when both events occur.


We have to estimate where the start time really is. A reasonable guess could be as follows:

We would like to remove the lone ones and fill the lone zeros, to obtain:

While this is not quite the ideal signal, it is a reasonable approximation. With further a priori knowledge of the debouncing circuitry and the switch itself, one could compensate and place the dashed line more precisely, yielding a much better approximation of the ideal levels.

What we will need to process the bits is, in fact, a digital filter, but of a special kind, one usually not covered in textbooks: a binary finite impulse response (FIR) filter. Digital signal processing theory textbooks suppose that the signal is represented in the computer with some arbitrarily high precision and that any numerical operations will succeed without so much as a rounding error. Realistically, even when using 16- or 24-bit samples, one faces rounding errors and similar numerical artifacts. Here, we’re in a much more dire situation: the signal is a stream of bits. That is, for the micro-controller, the signal looks like this:


and (following the previous figures) we’d like it to be understood like:


Because we will need to look at bits from both sides of a given bit to decide whether or not to “repair” it, the filter will introduce a necessary delay. The length of the delay depends on how wide we want the window around the bit to repair to be. Let us, for our example, suppose that we will need, say, 5 bits: two bits before, two bits after the bit to repair. The filter’s delay is therefore two bits. The code for the register holding the window will look like:

// adds a bit in a 5-bit
// window.
void add_bit( int bit, unsigned * window )
{
  *window = ((*window << 1) | bit) & 0x1f;
}

This is a basic shift register where bits enter on one side and exit on the other. To this register, you want to add a filter that decides whether you’re reading a one or a zero—after a two-bit delay. The code will look like:

unsigned window=0; // initially.
// somewhere in the scan loop
// (or maybe it's read by interrupt)
int bit = read_bit();
add_bit( bit, &window );
int filtered_bit = filter(window);

Note that the code doesn’t re-write the bits in the window. The filtered bit is returned, but the original bits remain untouched in the window. This is why we are talking about a finite response filter; since we do not modify the contents of the window, an error cannot propagate on the whole signal. In our case, it doesn’t propagate at all because the next decision from the filter will not take the previous decisions into account.

So, how do we implement the filter? One method is the fat bit method, which just makes a bit become fat, that is, three bits wide. The filter code is minimal:

// Fat bit method
int filter(unsigned window)
{
  // filters 5 bits and
  // returns the middle one
  return ((window | (window << 1) | (window >> 1)) >> 2) & 1;
}

Suppose we have our five-bit window containing 01011, and the middle 0 is the bit to filter. Using the above method, we have that (01011>>1) is 00101 and (01011<<1) is 10110; or-ed together with the window itself, they yield 11111, so the filter returns 1.

The problem with this method, however, is that the signal spreads both ways and that the algorithm never removes ones from the signal. Clearly, we need something a bit more sophisticated.

Rather than shuffling and or-ing bits in a more or less random way, the next method, a more general one, uses efficient pattern matching. Consider the following filter function:

int filter(unsigned window)
{
  // still, we are filtering
  // on 5 bits
  static const int filtered[32]=
    {
      0, // 00000 --> 00000
      0, // 00001 --> 00001
      0, // 00010 --> 00010
      0, // 00011 --> 00011
      // ...
      1, // 01010 --> 01110
      1, // 01011 --> 01111
      // ...
      1, // 11011 --> 11111
      // ...
    };

  return filtered[window];
}

The filtered array can be ad hoc, created by hand using a priori knowledge, or generated by a program that computes the ideal set of patterns depending on rules you decide upon. The exact generation method doesn’t really matter, as long as you associate each possible pattern with a decision, 0 or 1.

This method is however somewhat wasteful space-wise (we are allocating 32 ints) and time-wise. Space-wise, we might want to use chars instead of ints, but there’s a much better solution; we will come back to it in a moment. Time-wise, the address generation involved in filtered[window] might be costly, relatively speaking. First, it involves pointer arithmetic (loading pointers, computing offsets, etc.) and second, it reads from memory, which, in certain cases, may be quite slow.

To save memory, in cases where the window isn’t really wide, we can replace the table by a magic number containing all the bits from the table and obtain something like:

int filter(unsigned window)
{
  const unsigned magic_filter = 0x....;
  return (magic_filter >> window) & 1;
}

which is efficient if the shift instruction is efficient. This method can be used if the window is small enough. A 32-bit unsigned constant allows a window of only 5 bits, which isn’t much; a 64-bit unsigned only gives one additional bit, for a six-bit window. Fortunately, we can stretch the method a bit to include much larger windows.

Remember, in Bit Blenders, we introduced the test_bit function that allowed us to fetch the n-th bit of a block of memory? Let us use it to extend the previous method:

int filter(unsigned window)
{
  static const unsigned char patterns[] = { .... };
  return test_bit(patterns, window);
}

where the size of the array patterns is 1<<(window_size-3) bytes, with window_size in bits. This means that for a 16-bit window (which might be the maximum practical width) the array would need 8 KB of (read-only) memory.

Debouncing has a number of quite real applications. In embedded devices using man/machine interfaces (such as an MP3 player, or even your desktop computer’s USB keyboard) debouncing occurs quite naturally. It also occurs in industrial control, where sensor switches are placed in and around machines, often in unexpected forms and shapes. For example, rotary encoders may track the position of a conveyor to surprising accuracy. To do so, some rotary encoders read a Gray-coded wheel (either electrically or optically, for longer operating life) and use debouncing to deal with the cases where the wheel just barely changes position. I am sure debouncing also finds applications in game programming, and game controllers likely use it extensively.

N.B. Sorry about the posting delay. I planned to update the blog on Tuesdays, as I did in the previous entries, but an unforeseen event came up, so I had to postpone to Wednesday. At least, I’m not a month off!

2 Responses to Debouncing using Binary Finite Impulse Response Filter

  1. Nicolas A. B.-N. says:

    Your post was extremely informative and to the point. Good work !

    I just noticed one typo: “[…] decide whet er”.

  2. Steven Pigeon says:

    Thanks. Corrected.
