
3 The Robust Regularizer

The major criticism of the variational approach to shape-from-shading is that the quadratic regularizing term over-smoothes the recovered needle-map. In particular, the quadratic regularizing term discourages sudden changes in surface normal direction across a surface. The main consequence is to blur high-curvature surface features. Our aim in this paper is to illustrate how the smoothness constraints can be controlled more effectively using error kernels suggested by robust statistics. A review of the alternatives offered in the literature leads us to a new class of error-kernel for the shape-from-shading problem.

3.1 The General Update Equation

In this section we illustrate how the variational calculus can be applied to a general robust regularizer to develop iterative equations for the recovery of needle-maps. The robust error function $\rho_\sigma(\eta)$ is defined on the data error-residual $\eta$. The quantity $\sigma$ controls the width of the error kernel. It is also convenient to couch the process of robust estimation in terms of an influence function $\psi_\sigma(\eta)$, which can be used to weight parameter estimates according to their associated error-residuals. Formally, the influence function is related to the error-function by $\psi_\sigma(\eta) = \partial \rho_\sigma(\eta) / \partial \eta$.
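As a point of reference (with a normalisation chosen purely for illustration), a quadratic kernel $\rho_\sigma(\eta) = \eta^2/\sigma^2$ has influence function $\psi_\sigma(\eta) = 2\eta/\sigma^2$: the influence grows without bound with the residual, so large smoothness errors are never down-weighted. This is the over-smoothing behaviour of the quadratic regularizer criticised above.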

The novel contribution in this paper is to use the robust error kernel as a smoothness prior on the derivatives of the needle-map. Specifically, we adopt the following form for the regularised energy-function

$I = \int\!\!\int \left\{ \left( E(x,y) - \mathbf{n}\cdot\mathbf{s} \right)^2 + \lambda \left[ \rho_\sigma\!\left( \left\| \frac{\partial \mathbf{n}}{\partial x} \right\| \right) + \rho_\sigma\!\left( \left\| \frac{\partial \mathbf{n}}{\partial y} \right\| \right) \right] \right\} \, dx \, dy$

where $E(x,y)$ is the image brightness, $\mathbf{n}$ the needle-map normal, $\mathbf{s}$ the light-source direction and $\lambda$ the regularization constant.

In other words, we apply the robust error kernel separately to the magnitudes of the derivatives of the needle-map in the x and y directions. Applying the variational calculus to this energy function, in a directly analogous fashion to Section 2, yields the Euler equation

$\left( E(x,y) - \mathbf{n}\cdot\mathbf{s} \right)\mathbf{s} + \frac{\lambda}{2} \left[ \frac{\partial}{\partial x}\!\left( \psi_\sigma\!\left( \left\| \mathbf{n}_x \right\| \right) \frac{\mathbf{n}_x}{\left\| \mathbf{n}_x \right\|} \right) + \frac{\partial}{\partial y}\!\left( \psi_\sigma\!\left( \left\| \mathbf{n}_y \right\| \right) \frac{\mathbf{n}_y}{\left\| \mathbf{n}_y \right\|} \right) \right] = \mathbf{0}$

where $\mathbf{n}_x = \partial \mathbf{n} / \partial x$ and $\mathbf{n}_y = \partial \mathbf{n} / \partial y$.

As a result, the fixed-point iterative equation for updating the components of the needle map is



This result is entirely general: any robust error kernel can be inserted into it to yield a shape-from-shading scheme. However, it must be stressed that performance is critically determined by the choice of error-kernel.
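To make the plug-in nature of the scheme concrete, the following Python sketch implements one possible discretisation of such a fixed-point update. It is a minimal sketch only: the Brooks and Horn style brightness-error data term, the one-sided finite differences, the $\psi_\sigma(\eta)/\eta$ weighting of the smoothness term and the final re-normalisation are all assumptions, and the function names (update_needle_map, huber_influence) are hypothetical rather than drawn from this paper.

import numpy as np

def huber_influence(eta, sigma=1.0):
    # Standard Huber influence function: linear up to sigma, constant beyond.
    return np.clip(eta, -sigma, sigma)

def quadratic_influence(eta, sigma=1.0):
    # Influence of the quadratic kernel rho(eta) = eta**2 (monotonic, unbounded).
    return 2.0 * eta

def update_needle_map(n, E, s, influence, lam=1.0, sigma=1.0, eps=1e-8):
    # One fixed-point update of the needle-map n (H x W x 3 array of unit normals),
    # given image brightness E (H x W), unit light-source direction s (3,) and a
    # robust influence function psi_sigma supplied as `influence`.

    # Forward differences of the needle-map in the x and y directions.
    nx = np.diff(n, axis=1, append=n[:, -1:, :])
    ny = np.diff(n, axis=0, append=n[-1:, :, :])

    # Magnitudes of the derivatives: the residuals fed to the robust kernel.
    mx = np.linalg.norm(nx, axis=2) + eps
    my = np.linalg.norm(ny, axis=2) + eps

    # Influence-based weights psi(eta)/eta: for robust kernels these damp the
    # contribution of large, discontinuity-like derivative magnitudes.
    wx = (influence(mx, sigma) / mx)[..., None]
    wy = (influence(my, sigma) / my)[..., None]

    # Crude one-sided pull of each normal towards its forward neighbours,
    # moderated by the robust weights (a stand-in for the divergence term).
    smooth = n + 0.25 * (wx * nx + wy * ny)

    # Brightness-error data term pushes n towards satisfying E = n . s.
    data = (E - n @ s)[..., None] * s

    n_new = smooth + (1.0 / lam) * data

    # Re-normalise so the updated normals stay on the unit sphere.
    return n_new / (np.linalg.norm(n_new, axis=2, keepdims=True) + eps)

Iterating update_needle_map from an initial needle-map (for instance, normals aligned with the light source) until the update stabilises gives a rudimentary robust scheme; substituting quadratic_influence, or a re-descending influence function, for huber_influence changes the behaviour at discontinuities in exactly the sense discussed in Section 3.2.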

3.2 Choice of Robust Regularizer

Although robust estimation is usually posed in terms of the influence function $\psi_\sigma(\eta)$, it is the form of the associated error or ``energy'' function $\rho_\sigma(\eta)$ that is of primary importance in smoothness regularization. Formally, the energy is related to the first moment of the influence function, i.e. $\rho_\sigma(\eta) = \int_0^{\eta} \psi_\sigma(\eta')\, d\eta'$. Moreover, the asymptotic behaviour of the derivative of the energy function allows us to establish a broad-based taxonomy of the available influence functions. Broadly speaking, there are three classes of influence function. The first of these is referred to as re-descending; here the derivative of the energy function asymptotically approaches zero. In the second case, where the derivative of the energy function becomes asymptotically constant, the influence function is referred to as sigmoidal. The third category comprises those whose energy-function derivative increases monotonically without bound; the quadratic prior falls into this class.
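In symbols, writing $\psi_\sigma = \partial \rho_\sigma / \partial \eta$ as above, the three classes may be summarised as

$\lim_{|\eta|\to\infty} \psi_\sigma(\eta) = 0$ (re-descending),
$\lim_{|\eta|\to\infty} |\psi_\sigma(\eta)| = c$ with $0 < c < \infty$ (sigmoidal),
$|\psi_\sigma(\eta)|$ increasing without bound (monotonic, e.g. the quadratic prior).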

According to this taxonomy, Tukey's bi-weight is re-descending. Huber's robust kernel [8] is sigmoidal. Both are defined in a piecewise manner, and so are not particularly amenable to variational treatment. Li's [11] adaptive potential functions are continuous, but all fall into the re-descending category.
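For reference, Tukey's bi-weight is conventionally written (in one common parametrisation, with the kernel width playing the role of $\sigma$) as $\psi_\sigma(\eta) = \eta\,[1 - (\eta/\sigma)^2]^2$ for $|\eta| \le \sigma$ and $\psi_\sigma(\eta) = 0$ otherwise; the influence re-descends to exactly zero outside the kernel width, and it is this piecewise definition that hampers a variational treatment.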

Influence functions with monotonically increasing energy-function derivatives tend to over-smooth genuine discontinuities in image brightness, since such discontinuities lead to large values of smoothness error. Conversely, re-descending influence functions do not penalise sharp changes in surface orientation. Although this leads to improved treatment of discontinuities, it can be at the expense of increased noise sensitivity. Sigmoidal influence functions represent a compromise between the dual aims of recovering discontinuities and rejecting noise artifacts.

Unfortunately, none of the available sigmoidal error-kernels are defined in a continuous manner. In order to pursue the variational analysis of the sigmoidal case, we will present a continuous variant of Huber's kernel, based on a hyperbolic tangent function.

Sigmoidal-Derivative Robust Regularizer

The classical example of a sigmoidal-derivative energy function is Huber's estimator (Figure 1), defined (in one standard parametrisation) by the influence function and error-function

$\psi_\sigma(\eta) = \begin{cases} \eta & |\eta| \le \sigma \\ \sigma\,\mathrm{sgn}(\eta) & |\eta| > \sigma \end{cases} \qquad \rho_\sigma(\eta) = \begin{cases} \tfrac{1}{2}\eta^2 & |\eta| \le \sigma \\ \sigma|\eta| - \tfrac{1}{2}\sigma^2 & |\eta| > \sigma \end{cases}$
Figure 1: The Huber regularizer.

As pointed out earlier, the piecewise nature of Huber's estimator renders it unsuitable for variational analysis. To provide a continuous counterpart to Huber's error-kernel, we have investigated the following influence and energy functions, which have the qualitative shapes shown in Figure 2

 
Figure 2: The sigmoidal-derivative regularizer.
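One concrete pair of functions with the qualitative behaviour sketched in Figure 2 (an assumed form given here for illustration, not necessarily the exact definition adopted in this work) is

$\psi_\sigma(\eta) = \tanh\!\left(\frac{\eta}{\sigma}\right), \qquad \rho_\sigma(\eta) = \sigma \ln \cosh\!\left(\frac{\eta}{\sigma}\right).$

The influence function is continuous, smooth and asymptotically constant, i.e. sigmoidal, while the energy behaves quadratically ($\rho_\sigma(\eta) \approx \eta^2/2\sigma$) for small residuals and linearly for large ones, mirroring Huber's kernel.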

Substituting the sigmoidal regularizer into the generalised needle-map update equation yields the following result



It is illuminating to consider the behaviour of this update equation for small and large smoothness errors. Firstly, the averaging of the neighbourhood normals is moderated by a weighting function of the smoothness error; this averaging effect is most pronounced when the smoothness error is small. The remaining contribution to the smoothness process vanishes at the origin and tends towards zero for large smoothness errors, only kicking in at intermediate error conditions.
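To make this concrete, consider the illustrative hyperbolic-tangent kernel introduced above (again an assumed form). The associated smoothing weight $w(\eta) = \psi_\sigma(\eta)/\eta = \tanh(\eta/\sigma)/\eta$ attains its maximum $1/\sigma$ at $\eta = 0$ and decays monotonically to zero, consistent with averaging that is strongest for small smoothness errors; the complementary quantity $\eta\, w'(\eta) = \left[ (\eta/\sigma)\,\mathrm{sech}^2(\eta/\sigma) - \tanh(\eta/\sigma) \right]/\eta$ vanishes at the origin, returns to zero as $\eta \to \infty$, and is largest in magnitude at intermediate residuals, matching the qualitative behaviour described above.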



