Enforcing Integrability by Error Correction using $\ell_1$-minimization

Dikpal Reddy, University of Maryland, College Park, MD
Amit Agrawal, Mitsubishi Electric Research Labs, Cambridge, MA
Rama Chellappa, University of Maryland, College Park, MD

Abstract

Surface reconstruction from gradient fields is an important final step in several applications involving gradient manipulations and estimation. Typically, the resulting gradient field is non-integrable due to linear/non-linear gradient manipulations, or due to the presence of noise/outliers in gradient estimation. In this paper, we analyze integrability as error correction, inspired by recent work in compressed sensing, particularly $\ell_0$-$\ell_1$ equivalence. We propose to obtain the surface by finding the gradient field which best fits the corrupted gradient field in the $\ell_1$ sense. We present an exhaustive analysis of the properties of the $\ell_1$ solution for gradient field integration using linear algebra and a graph analogy. We consider three cases: (a) noise, but no outliers, (b) no noise, but outliers, and (c) presence of both noise and outliers in the given gradient field. We show that the $\ell_1$ solution performs as well as least squares in the absence of outliers. While previous $\ell_0$-$\ell_1$ equivalence work has focused on the number of errors (outliers), we show that the location of errors is equally important for gradient field integration. We characterize the $\ell_1$ solution both in terms of location and number of outliers, and outline scenarios where the $\ell_1$ solution is equivalent to the $\ell_0$ solution. We also show that when the $\ell_1$ solution is not able to remove outliers, the property of local error confinement holds: i.e., the errors do not propagate to the entire surface as they do in least squares. We compare with previous techniques and show that the $\ell_1$ solution performs well across all scenarios without the need for any tunable parameter adjustments.

1. Introduction

Surface reconstruction from gradient fields is an important final step in many vision and graphics applications involving gradient domain processing. These can be broadly classified as (a) manipulating gradients and/or (b) estimating gradients before integration. Methods such as Photometric Stereo (PS) [25] and Shape from Shading (SfS) [13] estimate the gradient field of the surface from captured images. Applications such as image editing [19], stitching [18], HDR compression [9], etc., first apply local/global manipulations to the gradient field of single/multiple images. The final image is then reconstructed from the modified gradient field. The reader is referred to the recent course [2] for more details.

Typically, the resulting gradient field is non-integrable due to linear/non-linear gradient manipulations, or due to the presence of noise/outliers in gradient estimation (figure 1). For a reconstruction algorithm, two important considerations are (a) robustness, or the ability to handle outliers, and (b) local error confinement (LEC) [1]. Robustness means that surface features are well reconstructed in the presence of outliers. A related property is LEC, which ensures that distortions in the integrated surface are confined spatially close to the errors in the underlying gradient field. It is well known that the least squares estimate is not robust in the presence of outliers. While several integration techniques have been proposed before, we analyze robust surface integration as an error correction problem. We are inspired by recent work in compressed sensing [5], particularly $\ell_0$-$\ell_1$ equivalence.
We propose to obtain the surface by finding the gradient field which best fits the corrupted gradient field in the $\ell_1$-norm sense. While minimizing the $\ell_1$-norm is not new as a robust statistic, we analyze the properties of the $\ell_1$ solution and provide new insights using linear algebra and a graph analogy. We compare with existing techniques and show that the $\ell_1$ solution performs well across all scenarios without the need for any tunable parameter adjustments.

1.1. Contributions

We analyze robust gradient integration as error correction by utilizing ideas from the sparse signal recovery literature. We show that the location of errors is as important as the number of errors for gradient integration, which is not typically explored when considering $\ell_0$-$\ell_1$ equivalence. We exhaustively analyze the properties of the $\ell_1$ solution in terms of robustness and LEC for various outlier patterns and noise in the given gradient field.

1.2. Related work

Enforcing integrability: The simplest approach is to find an integrable gradient field (or the surface) which best fits the given gradient field, by minimizing the least squares cost function. This amounts to solving the Poisson equation [24]. Frankot & Chellappa [11] project the given gradient field onto integrable functions using a Fourier basis to enforce integrability. Cosine basis functions were proposed in [12], while Kovesi [17] proposed shapelets as a redundant set of basis functions. Petrovic et al. [21] used a loopy belief propagation algorithm to find the corresponding integrable field. Methods based on the $\ell_2$-norm cannot handle large outliers in the gradient field.

Robust estimation: There has been a large body of work on robust parametric estimation using RANSAC [10], which becomes combinatorial in nature as the number of parameters increases. For gradient integration on an $N \times N$ grid, there are $N^2$ unknowns (pixel values) and $2N^2$ observations (x and y gradients). Thus, RANSAC is computationally prohibitive [3]. M-estimators modify the least squares cost function to reduce the influence of outliers. Several such influence functions, such as Huber, Tukey, etc., have been proposed [14, 22]. Agrawal et al. [3] proposed a general framework for robust gradient integration by gradient transformations, such as anisotropic weighting and affine transformations. The diffusion algorithm in [3] solves a modified Poisson equation by applying edge-preserving affine transformations to the gradient field. To calculate the local surface edge direction, the algorithm uses gradient values in a neighborhood. We show that it performs poorly when the neighborhood of an edge is corrupted by outliers. Our approach instead minimizes the $\ell_1$-norm of gradient errors. Minimizing the $\ell_1$-norm has been shown to be effective in correcting outlier errors and recovering sparse signals [8, 15, 18]. Traditionally, the $\ell_1$-norm is not preferred since the cost function is not analytically differentiable and is computationally expensive. However, there has been a renewed interest in using $\ell_1$ cost functions due to $\ell_0$-$\ell_1$ equivalence for sparse reconstructions under the restricted isometry property (RIP). We use the RIP to show that for gradient integration, the location of outliers is as important as their number. In addition, we use the expander graph structure of gradient-curl pairs to understand the distribution of outliers which can be corrected.

Graph based approach: To avoid the combinatorial nature of RANSAC, a greedy graph-based technique was proposed in [1].
This approach treats the underlying 2D grid as a graph, gradients as edges, and unknown surface values as nodes. The outlier gradients are heuristically determined by thresholding the curl values over the graph, and the corresponding edges are removed. If the graph remains connected, the surface can be integrated using the remaining edges (gradients). Else, a minimal set of edges is chosen to connect the graph by assigning edge weights using gradient magnitude or curl values. However, the underlying heuristic of using curl values as a goodness measure often fails in the presence of noise. We show that [1] performs poorly in the presence of noise and that minimizing the $\ell_1$-norm effectively handles noise as well as corrects sparsely distributed outliers in the gradient field. In addition, when the outliers are concentrated, the LEC property is maintained.

Figure 1. (Left) Ground truth $p$ for Mozart. (Right) Outliers along possible shadow regions in the gradient field obtained from PS. The magnitude of the outliers is 5 times the largest ground truth gradient values.

Denoising and TV regularization: Image denoising is a classical problem, and several approaches for feature-preserving denoising have been successfully demonstrated. Anisotropic filtering [20] takes into account the local edge direction in a PDE based framework. Rudin et al. [23] proposed total variation (TV) regularization, which penalizes the $\ell_1$-norm of the gradients of the estimated (denoised) image. Note that our approach is different: we minimize the $\ell_1$-norm of the gradient errors, not of the gradients themselves. Thus, we do not employ any assumptions on the underlying surface such as natural image statistics (the distribution of gradients is peaked at zero).

2. Gradient integration as error correction

We use terminology from [1]. Let $S(y, x)$ be the desired surface over a rectangular grid of size $H \times W$. In vector form, we denote it by $s$. Let $(p, q)$ denote the given non-integrable gradient field, possibly corrupted by noise and outliers. The goal is to estimate $S$ from $(p, q)$. The integrable gradient field of $S$ is given by the forward difference equations

$$p_0(y, x) = S(y, x+1) - S(y, x), \qquad q_0(y, x) = S(y+1, x) - S(y, x). \quad (1)$$

In vector form, (1) can be written as

$$Ds = \begin{bmatrix} p_0 \\ q_0 \end{bmatrix} = g_0, \quad (2)$$

where $g_0$ denotes the stacked gradients and $D$ denotes the gradient operator matrix. Each row of $D$ has two non-zero entries: $\pm 1$ in the pixel positions corresponding to that particular gradient. The curl of the gradient field can be defined as the loop integral around a box of four pixels [1]:

$$\mathrm{curl}(y, x) = p(y+1, x) - p(y, x) + q(y, x) - q(y, x+1),$$

which can be written as

$$d = C \begin{bmatrix} p \\ q \end{bmatrix} = Cg. \quad (3)$$

Here, $d$ denotes the vector of stacked curl values and $C$ denotes the curl operator matrix. Each row of $C$ has only four non-zero entries ($\pm 1$) corresponding to the gradients associated with the loop integral. Since the gradient field $g_0$ is integrable, $Cg_0 = 0$. However, for the given non-integrable gradient field $g$, $Cg \neq 0$. Decomposing $g$ as the sum of $g_0$ and a gradient error field $e$, we get

$$g = g_0 + e = Ds + e. \quad (4)$$

Applying the curl operator on both sides, we obtain

$$d = Ce. \quad (5)$$

Thus, integrability can also be posed as error correction: the goal is to estimate the gradient error field $e$ given the curl $d$ of the corrupted gradient field. Note that in this formulation, there are $M = HW$ knowns (curl values) and $N = 2HW$ unknowns (error gradients), leading to an under-determined system of linear equations. We use $\|\cdot\|_p$ to denote the $\ell_p$-norm; $\|e\|_0$ simply counts the nonzero elements of $e$.
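To make the operators in (1)-(5) concrete, here is a minimal numerical sketch. It is our illustration, not the authors' code; the function names gradients and curl are ours:

```python
import numpy as np

def gradients(S):
    """Eq. (1): p0(y,x) = S(y,x+1) - S(y,x), q0(y,x) = S(y+1,x) - S(y,x)."""
    p0 = S[:, 1:] - S[:, :-1]   # x-gradients, shape (H, W-1)
    q0 = S[1:, :] - S[:-1, :]   # y-gradients, shape (H-1, W)
    return p0, q0

def curl(p, q):
    """Loop integral around each 2x2 pixel box:
    curl(y,x) = p(y+1,x) - p(y,x) + q(y,x) - q(y,x+1)."""
    return p[1:, :] - p[:-1, :] + q[:, :-1] - q[:, 1:]   # shape (H-1, W-1)

# Sanity check: an integrable gradient field has zero curl (Cg0 = 0).
S = np.random.rand(8, 8)
p0, q0 = gradients(S)
assert np.allclose(curl(p0, q0), 0)
```

On an $H \times W$ grid the exact counts are $H(W-1) + (H-1)W$ gradients and $(H-1)(W-1)$ curl values; the paper's $M = HW$ and $N = 2HW$ are these quantities up to boundary terms.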
The Poisson solver finds a least squares fit to the gradients by solving

$$\hat{e} = \arg\min \|e\|_2 \quad \text{s.t.} \quad d = Ce. \quad (6)$$

The least squares estimate is optimal when the gradient errors obey a Gaussian distribution. If the errors contain outliers, then the estimate is skewed, leading to severe artifacts in the reconstructed surface or image. Outliers in the gradient field can be understood as arbitrarily large errors and could obey any distribution. An example of the errors in gradients obtained from PS is shown in figure 1.

$\ell_0$ minimization: An approach to handle outliers is to combinatorially search over the possible locations of outliers, estimate them subject to the curl constraint (5), and pick the combination which satisfies the constraints the best. This can be written mathematically as

$$\hat{e} = \arg\min \|e\|_0 \quad \text{s.t.} \quad d = Ce. \quad (7)$$

This problem is NP-hard and hence computationally infeasible.

$\ell_1$ minimization: Instead, we solve a convex relaxation of (7) by replacing the $\ell_0$-norm of the gradient error $e$ with the $\ell_1$-norm. The conditions under which this equivalence holds true are described in detail in Sec. 4.

$$\hat{e} = \arg\min \|e\|_1 \quad \text{s.t.} \quad d = Ce. \quad (8)$$

(8) can be solved using convex optimization algorithms in polynomial time; a linear programming sketch is given below.

Figure 2. (a) Grid & curl: graph with pixels as nodes and gradients as edges; curl is calculated along $2 \times 2$ loops. (b) Spanning tree: spanning tree edges in black. (c) ST & errors: gradient errors in dashed lines. (d) Gradients & curl: solid black lines and red curl loops have an expander graph structure.

3. Graph based interpretation

In [1], a graph-based interpretation is provided for integrating a gradient field corrupted by outliers. We discuss this method and borrow its framework to explain our approach. [1] treats the pixel grid as a graph $(G, E)$, where the pixels are the nodes of the graph and gradients correspond to the edges of the graph (figure 2(a)). Let us first assume that the locations of outliers (bad gradients) are known. [1] proposes to remove the corresponding edges from the graph. If the resulting sub-graph remains connected, then integration can be done using the remaining edges/gradients. Else, the graph is connected using a minimal set of edges, by assigning edge weights based on gradient magnitude or curl values. Since, in practice, the locations of outlier gradients are not known, [1] thresholds the curl values as a heuristic.

Relationship with $\ell_0$ minimization: Note that the key idea is that for integration to be possible, the resulting graph has to be connected. Thus, the minimal set of gradients required for integration should correspond to a spanning tree (ST) [3], as shown in figure 2(b). First, let us assume that the graph remains connected after removing the edges corresponding to outlier gradients. Then, it is easy to see that [1] is a greedy algorithm for $\ell_0$ minimization. This is because the resulting sub-graph trivially minimizes the $\ell_0$-norm of gradient errors. However, the important point is that even if we know the location of outliers, it does not guarantee error-free reconstruction, since the resulting sub-graph needs to be connected. For example, it is easy to see that if all 4 edges of a node are removed, the graph does not remain connected (figure 3, clique-5). On the other hand, if the errors are distributed as shown in figure 3 (right), perfect reconstruction can be achieved. Thus, even $\ell_0$ minimization does not guarantee perfect reconstruction. It can handle up to 25% outliers¹, but can fail for as few as 4 outliers.
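In practice, problem (8) can be recast as a linear program via the standard split $e = u - v$ with $u, v \geq 0$. The sketch below is our illustration, not the authors' solver; it assumes the curl matrix $C$ and curl vector $d$ are available as a matrix and an array:

```python
import numpy as np
from scipy.optimize import linprog
from scipy.sparse import csr_matrix, hstack

def l1_curl_fit(C, d):
    """Eq. (8): min ||e||_1 s.t. Ce = d, as an LP in (u, v) with e = u - v."""
    Cs = csr_matrix(C)
    M, N = Cs.shape
    cost = np.ones(2 * N)            # 1'u + 1'v equals ||e||_1 at the optimum
    A_eq = hstack([Cs, -Cs])         # C u - C v = d
    res = linprog(cost, A_eq=A_eq, b_eq=d, bounds=(0, None), method="highs")
    u, v = res.x[:N], res.x[N:]
    return u - v
```

Once $\hat{e}$ is recovered, the surface is integrated from the corrected field $g - \hat{e}$, e.g., by a least squares/Poisson solve. The noisy variant (9) in Sec. 3.1 replaces the equality constraint with $\|d - Ce\|_2 \leq \epsilon$, turning the LP into a second-order cone program.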
While recent work in compressed sensing [5] has focused on the number of errors (outliers), the location of outliers is equally important for the gradient reconstruction problem. Since $\ell_0$ minimization can fail depending on the spatial distribution of errors, it is important to consider it while analyzing $\ell_0$-$\ell_1$ equivalence.

RANSAC: In gradient integration, RANSAC would search over different realizations of the ST and pick the one which rejects the most outliers. As shown in [3], since the number of parameters is large, RANSAC is computationally prohibitive.

¹ In general, $\ell_0$ minimization can handle up to 50% outliers. For gradient integration, a unique solution can be obtained only for a maximum of 25% outliers.

3.1. Performance under noise

Note that a robust algorithm should also work well in the presence of noise. In [1], a heuristic is used to estimate outlier errors, assuming that the non-zero curl values are related to outlier gradients. However, this assumption is well suited only when the gradient field is corrupted by outliers, and it fails in the presence of noise. Under noise, the algorithm in [1] confuses correct gradients for outliers and performs poorly, as shown in figure 5. In the presence of noise, the gradient error $e$ is non-sparse, with the largest components corresponding to outliers. To handle noise, the cost function is modified to

$$\hat{e} = \arg\min \|e\|_1 \quad \text{s.t.} \quad \|d - Ce\|_2 \leq \epsilon \quad (9)$$

for an appropriate $\epsilon$.

4. $\ell_0$-$\ell_1$ equivalence

One of the earliest methods in sparse signal recovery by minimizing the $\ell_1$-norm is Basis Pursuit [8], but it is only recently that conditions for equivalence between minimizing the $\ell_0$- and $\ell_1$-norms have been provided in the compressed sensing literature [5, 7, 6]. In fact, the gradient error correction problem is similar to the classical error correction problem analyzed in [7], but the location of errors is equally important, as discussed in section 3. Continuing the notation, we present sufficient conditions for $\ell_0$-$\ell_1$ equivalence as described in [6]. They are: (i) $e$ is $k$-sparse ($\|e\|_0 = k$), and (ii) the matrix $C$ obeys the RIP with isometry constant $\delta_{2k}$.

The RIP (with $\delta_{2k}$) is a sufficient condition on a matrix ($C$) which guarantees recovery of all $k$-sparse vectors ($e$) from their projections ($d$) using $\ell_0$ minimization (if $\delta_{2k} < 1$) or $\ell_1$ minimization (if $\delta_{2k} < \sqrt{2} - 1$). This implies that $\ell_1$ minimization can recover a $k$-sparse vector as well as $\ell_0$ minimization when $\delta_{2k} < \sqrt{2} - 1$. $C$ is said to satisfy the RIP with isometry constant $\delta_{2k}$ if the eigenvalues of $C_T^{\top} C_T$² lie between $(1 - \delta_{2k})$ and $(1 + \delta_{2k})$ for every submatrix $C_T$ formed by choosing $2k$ columns with index set $T$. Note that the condition to recover a $k$-sparse $e$ is actually on $2k$ columns of $C$. This is to ensure that the true $k$-sparse vector is not confused with any other $k$-sparse vector with the same projection $d$, thereby ensuring a unique solution. Typically, dense matrices such as i.i.d. Gaussian or partial Fourier matrices [5] satisfy the RIP for large $k$.

² $C_T^{\top}$ denotes the transpose of $C_T$.

As discussed in section 3, if all 4 edges of a node are in error, they can't be corrected even if we knew their locations. This implies that the recovery of a 4-sparse gradient error vector $e$ using either $\ell_0$ or $\ell_1$ minimization is impossible. Thus, the RIP doesn't hold for $k = 4$ and hence for all $k \geq 4$. But the constant $\delta_{2k}$ corresponding to a $2k$ edge set $T$ does inform us whether any $k$ gradient errors in $T$ can be corrected using either $\ell_0$ or $\ell_1$ minimization. For a $2k$ edge set $T$, $\delta_{2k} < 1$ means that $C_T^{\top} C_T$ is nonsingular. This implies that the 2D graph remains connected after removing the corresponding $2k$ edges $T$.
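This singular/nonsingular dichotomy is easy to verify numerically. The sketch below is our construction (curl_matrix and the edge-index helpers are hypothetical names, and it checks only whether $C_T^{\top} C_T$ is singular, i.e. whether $\delta_{2k} = 1$, not the full RIP constant). It builds the curl matrix of Sec. 2 and compares the clique-5 pattern against four well-separated edges:

```python
import numpy as np

def curl_matrix(H, W):
    """Curl operator C: one row per 2x2 loop, +/-1 on its four edges.
    Columns: x-gradients p(y,x) first, then y-gradients q(y,x)."""
    nP = H * (W - 1)
    p = lambda y, x: y * (W - 1) + x        # column of p-edge at (y, x)
    q = lambda y, x: nP + y * W + x         # column of q-edge at (y, x)
    C = np.zeros(((H - 1) * (W - 1), nP + (H - 1) * W))
    for y in range(H - 1):
        for x in range(W - 1):
            r = y * (W - 1) + x
            C[r, p(y + 1, x)] = 1; C[r, p(y, x)] = -1
            C[r, q(y, x)] = 1;     C[r, q(y, x + 1)] = -1
    return C

H = W = 6
C = curl_matrix(H, W)
p = lambda y, x: y * (W - 1) + x
q = lambda y, x: H * (W - 1) + y * W + x

# Clique-5: all four edges incident to the interior node (2, 2).
T_bad = [p(2, 1), p(2, 2), q(1, 2), q(2, 2)]
# Four well-separated edges: their removal keeps the grid connected.
T_good = [p(0, 0), p(4, 3), q(0, 3), q(3, 0)]

for T in (T_bad, T_good):
    CT = C[:, T]
    print(np.linalg.eigvalsh(CT.T @ CT).min())  # ~0 for T_bad, > 0 for T_good
```

The zero eigenvalue for the clique-5 pattern reflects the fact that these four columns, signed appropriately, are the curl of the gradient field of a unit spike at the node, and hence sum to zero.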
Conversely, in figure 3 clique-5, $\delta_{2k} = 1$ since the graph does not remain connected when all four edges are in error.

4.1. Spatial distribution of errors

Figure 3 lists several spatial distributions of errors in an isolated neighborhood. We qualitatively analyze which of these can be corrected with the help of the isometry constant $\delta_{2k}$. A few of them are described in detail below. For example, in clique-2 ($2k = 2$), $\delta_{2k} = 0.5$, implying clique-1 ($k = 1$) can be corrected perfectly by $\ell_0$ minimization. However, in practice, $\ell_1$ minimization can also correct the single outlier although $\delta_{2k} > \sqrt{2} - 1$. Likewise, $\delta_{2k} = 0.5$ in clique-8 ($2k = 4$) implies clique-6 ($k = 2$) can be corrected perfectly by both $\ell_0$ and $\ell_1$ minimization. This confirms that the conditions on $\delta_{2k}$ are just sufficient. Nevertheless, the conditions provide insight into the error locations that can be corrected. Since the condition for $\ell_1$ recovery is stronger than that for $\ell_0$ recovery, there exist outlier distributions which $\ell_0$ minimization can correct but $\ell_1$ cannot. For example, since $\delta_{2k} = 0.5$ in clique-8, $\ell_0$ minimization can correct clique-3 but $\ell_1$ cannot always. Conversely, if $\ell_0$ minimization cannot correct a gradient error $e$, then neither can $\ell_1$. In other words, $\ell_1$ minimization corrects fewer errors than $\ell_0$.

We generalize to other outlier spatial distributions. Let $T$ denote the indices of some $2k$ edge locations and $T^c$ the complement edges. If $T^c$ is not a connected subgraph, the matrix $C_T^{\top} C_T$ is singular and $\delta_{2k} = 1$. This implies that there exist $k$ error locations in $T$ which $\ell_0$ minimization cannot correct uniquely. If $T^c$ is a connected subgraph, then the matrix $C_T^{\top} C_T$ is non-singular and $\delta_{2k} < 1$, suggesting $\ell_0$ minimization can correct any $k$ error locations in $T$. For sufficiently small $k$ we will have $\delta_{2k} < \sqrt{2} - 1$, and $\ell_1$ minimization corrects all of them. For example, $\ell_1$ minimization can correct outliers distributed as shown in figure 3 (right); an end-to-end numerical sketch is given at the end of this section.

Figure 3. Outliers in isolated cliques and the error correction ability of $\ell_1$ minimization. (Left) Top row: isolated cliques (set $T$) in error (red). Second row: number of unknown (gradients) and known (curls) variables. Third row: whether $\ell_1$ minimization can fully correct each pattern: (1) yes, (2) maybe (depends on the sign of the errors), (3) maybe, (4) maybe, (5) no, (6) yes, (7) maybe, (8) maybe, where for (3), (4), (7), and (8) the answer depends on the sign and magnitude of the errors. (Right) A $12 \times 12$ grid with errors, annotated with the smallest and largest eigenvalues.

4.2. Expander graph structure

Unlike the typical dense matrices in compressed sensing, the curl matrix $C$ is sparse and hence doesn't satisfy the RIP for even a few edges in error. Each curl value carries information about four gradients, and each gradient contributes to two curl values. In the graph obtained by removing the bor-
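Tying the pieces together, the following end-to-end sketch (ours; it reuses the gradients, curl_matrix, and l1_curl_fit helpers defined in the earlier sketches) corrupts a few well-separated gradients of a random surface and corrects them via (8). For such dispersed outliers the recovery is expected to be exact, whereas a least squares fit would spread the error over the surface:

```python
import numpy as np

H = W = 8
rng = np.random.default_rng(0)
S = rng.random((H, W))
p0, q0 = gradients(S)                        # Eq. (1)
g0 = np.concatenate([p0.ravel(), q0.ravel()])

# Sparse, well-separated outliers, as in figure 3 (right).
e_true = np.zeros_like(g0)
e_true[[3, 40, 75, 99]] = 10.0
g = g0 + e_true                              # corrupted gradient field, Eq. (4)

C = curl_matrix(H, W)
d = C @ g                                    # curl of the corrupted field, Eq. (5)
e_hat = l1_curl_fit(C, d)                    # Eq. (8)
print(np.abs(e_hat - e_true).max())          # expected ~0: outliers identified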