To better understand the palm rejection algorithm, let's first take a look at what your device sees when the hand is down (Figure 2). Here the green dot is the intentional input, and the blue dots are unintentional palm inputs.
Several properties distinguishing intentional from unintentional inputs immediately jump out: palm touches tend to have a large touch radius, move slowly, and appear in clusters close to one another, while the stylus tip has a small radius, moves quickly, and sits far from other touch points.
Rather than examining these properties instantaneously (on touch down) and classifying immediately, our algorithm makes an initial guess, then refines that guess every 50 ms until 500 ms has elapsed, at which point a final decision is made by tallying the votes from each 50 ms interval.
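As a rough sketch of this voting scheme (the `classify_window` helper and the touch representation are placeholders for illustration, not names from the released source), the loop might look like this:

```python
from collections import Counter

VOTE_INTERVAL_MS = 50    # re-classify every 50 ms
DECISION_TIME_MS = 500   # commit to a final decision after 500 ms

def classify_touch(touch, classify_window):
    """Collect a vote at each 50 ms step; the majority vote after 500 ms wins."""
    votes = []
    for t_ms in range(0, DECISION_TIME_MS + 1, VOTE_INTERVAL_MS):
        # classify_window examines the touch's behavior in a window
        # around the moment it first appeared (see below).
        votes.append(classify_window(touch, t_ms))  # "intentional" or "unintentional"
    return Counter(votes).most_common(1)[0][0]
```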
At each time step t, we examine touch point behavior over a time window from -t to t (where 0 is the moment the touch first appeared), taking the mean, standard deviation, and range of the touch radius, touch velocity, and distance to other touches (Figure 2). These behavior metrics, or features, are then fed into a previously trained decision tree, and a classification (intentional or unintentional input) is made.
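A minimal sketch of the per-window feature computation, using scikit-learn's `DecisionTreeClassifier` as a stand-in for the trained tree (the array layout and training setup here are assumptions, not the released implementation):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def window_stats(series):
    """Mean, standard deviation, and range of one behavior series."""
    a = np.asarray(series, dtype=float)
    return [a.mean(), a.std(), a.max() - a.min()]

def feature_vector(radii, velocities, neighbor_distances):
    """Nine features: stats of touch radius, touch velocity, and distance
    to other touches, each computed over the window [-t, t]."""
    return (window_stats(radii)
            + window_stats(velocities)
            + window_stats(neighbor_distances))

# One tree per time window, trained offline on labeled touches.
clf = DecisionTreeClassifier(max_depth=5)
# clf.fit(X_train, y_train)                # labeled feature vectors
# clf.predict([feature_vector(r, v, d)])   # 1 = intentional, 0 = palm
```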
Classifying at regular intervals has the benefit of producing a provisional guess that can be used to give the user immediate feedback, feedback that may later be revised. Figure 1 shows a video of our application demonstrating this behavior: palm touches are initially guessed to be stylus input and are later removed. Because in most cases the palm occludes these temporary guesses, the user is often unaware of this guessing behavior.
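The feedback loop itself might look something like the sketch below, where `renderer`, its stroke methods, and the `rendered` flag on the touch are hypothetical names for illustration:

```python
def on_vote(touch, label, renderer):
    """Apply each 50 ms vote to the screen: show ink for the current
    best guess, and retract it if a later vote flips the decision."""
    if label == "intentional" and not touch.rendered:
        renderer.draw_stroke(touch)     # show provisional ink immediately
        touch.rendered = True
    elif label == "unintentional" and touch.rendered:
        renderer.remove_stroke(touch)   # revise: erase ink the palm produced
        touch.rendered = False
```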
Source code for the binary classifiers and the feature computation can be downloaded here.