An Optimal Family of Exponentially Accurate One-Bit Sigma-Delta Quantization Schemes

Percy Deift*, C. Sinan Güntürk*, Felix Krahmer*†

January 21, 2010

Abstract

Sigma-Delta modulation is a popular method for analog-to-digital conversion of bandlimited signals that employs coarse quantization coupled with oversampling. The standard mathematical model for the error analysis of the method measures the performance of a given scheme by the rate at which the associated reconstruction error decays as a function of the oversampling ratio λ. It was recently shown that exponential accuracy of the form O(2^{−rλ}) can be achieved by appropriate one-bit Sigma-Delta modulation schemes. By general information-entropy arguments, r must be less than 1. The current best known value for r is approximately 0.088. The schemes that were designed to achieve this accuracy employ the "greedy" quantization rule coupled with feedback filters that fall into a class we call "minimally supported". In this paper, we study the minimization problem that corresponds to optimizing the error decay rate for this class of feedback filters. We solve a relaxed version of this problem exactly and provide explicit asymptotics of the solutions. From these relaxed solutions, we find asymptotically optimal solutions of the original problem, which improve the best known exponential error decay rate to r ≈ 0.102. Our method draws from the theory of orthogonal polynomials; in particular, it relates the optimal filters to the zero sets of Chebyshev polynomials of the second kind.

1 Introduction

Conventional Analog-to-Digital (A/D) conversion systems consist of two basic steps: sampling and quantization. Sampling is the process of replacing the input function (x(t))_{t∈ℝ} by a sequence of its sample values (x(t_n))_{n∈ℤ}. The sampling instances (t_n) are typically uniform, i.e., t_n = nτ for some τ > 0.
It is well known that this process incurs no loss of information if the signal is bandlimited and τ is sufficiently small. More precisely, if supp x̂ ⊂ [−Ω, Ω], then it suffices to pick τ ≤ τ_crit := 1/(2Ω), and the sampling theorem provides a recipe for perfect reconstruction via the formula

\[
x(t) = \tau \sum_{n \in \mathbb{Z}} x(n\tau)\, \varphi(t - n\tau), \tag{1}
\]

where ϕ is any sufficiently localized function such that

\[
\widehat{\varphi}(\xi) = \begin{cases} 1, & |\xi| \le \Omega, \\ 0, & |\xi| \ge \frac{1}{2\tau}, \end{cases} \tag{2}
\]

which we shall refer to as the admissibility condition for ϕ. Here, x̂ and ϕ̂ denote the Fourier transforms of x and ϕ, respectively. In this paper, we shall work with the following normalization of the Fourier transform on ℝ:

\[
\widehat{f}(\xi) := \int_{-\infty}^{\infty} f(t)\, e^{-2\pi i \xi t}\, dt.
\]

The value ρ := 1/τ is called the sampling rate, and ρ_crit := 1/τ_crit = 2Ω is called the critical (or Nyquist) sampling rate. The oversampling ratio is defined as

\[
\lambda := \frac{\rho}{\rho_{\mathrm{crit}}}. \tag{3}
\]

We shall assume in the rest of the paper that the value of Ω is arbitrary but fixed.

The next step, quantization, is the process of discretization of the amplitude, which involves replacing each x(t_n) by another value q_n suitably chosen from a fixed, typically finite set A (the alphabet). The resulting sequence (q_n) forms the raw digital representation of the analog signal x. The standard approach to recover an approximation x̃ to the original analog signal x is to use the reconstruction formula (1) with x(nτ) replaced by q_n, which produces an error signal e := x − x̃ given by

\[
e(t) = \tau \sum_{n \in \mathbb{Z}} \bigl( x(n\tau) - q_n \bigr)\, \varphi(t - n\tau). \tag{4}
\]

The quality of this approximation can then be measured by a variety of functional norms on the error signal. In this paper we will use the norm ‖e‖_{L∞}, which is standard and arguably the most meaningful.

* Courant Institute of Mathematical Sciences, New York University, New York, NY, USA.
† Hausdorff Center for Mathematics, Universität Bonn, Bonn, Germany.
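As a purely illustrative sanity check of the reconstruction formula (1), the sketch below samples a bandlimited test signal at four times the Nyquist rate and rebuilds it with an admissible kernel whose Fourier transform is a trapezoid (flat on [−Ω, Ω], zero beyond 2Ω). All names and parameter choices here are ours, not the paper's; the infinite sum is truncated, so the match is only approximate.

```python
import math

def sinc(x):                      # sin(pi*x)/(pi*x), continuous at 0
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

Omega = 1.0
tau = 1.0 / (8 * Omega)           # sampling period; lambda = rho/rho_crit = 4

def x(t):                         # bandlimited test signal, supp x_hat = {±0.3}
    return math.cos(2 * math.pi * 0.3 * t)

def phi(t):
    # Inverse FT of a trapezoid: phi_hat = 1 on [-Omega, Omega], 0 for
    # |xi| >= 2*Omega, so (2) holds since 1/(2*tau) = 4*Omega.
    return 3 * Omega * sinc(3 * Omega * t) * sinc(Omega * t)

def reconstruct(t, N=4000):       # truncation of the sum in (1)
    return tau * sum(x(n * tau) * phi(t - n * tau) for n in range(-N, N + 1))

err = max(abs(reconstruct(t) - x(t)) for t in [k / 5 - 1.0 for k in range(11)])
assert err < 1e-2                 # truncated sum reproduces x on [-1, 1]
```

The 1/t² decay of this kernel (as opposed to the 1/t decay of the ideal sinc) is what makes the truncated sum converge quickly; this is the practical payoff of the "sufficiently localized" requirement on ϕ.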
Traditionally, A/D converters have been classified into two main families: Nyquist-rate converters (λ ≈ 1) and oversampling converters (λ ≫ 1). When τ = τ_crit, the set of functions {ϕ(· − nτ) : n ∈ ℤ} forms an orthogonal system. Therefore, a Nyquist-rate converter necessarily has to keep |x(nτ) − q_n| small for each n in order to achieve small overall reconstruction error; this requires that the alphabet A form a fine net in the range of the signal. On the other hand, oversampling converters are not bound by this requirement, because as τ decreases (i.e., λ increases), the kernel of the operator

\[
T_\tau^\varphi : (c_n) \mapsto \sum_{n \in \mathbb{Z}} c_n\, \varphi(\cdot - n\tau) \tag{5}
\]

gets "bigger" in a certain sense, and small error can be achieved even with very coarse alphabets A. The extreme case is a one-bit quantization scheme, which uses A = {−1, +1}. For the circuit engineer, coarse alphabets mean low-cost analog hardware, because increasing the sampling rate is cheaper than refining the quantization. For this reason, oversampling data converters, in particular Sigma-Delta (Σ∆) modulators (see Section 2.1 below), have become more popular than Nyquist-rate converters for low- to medium-bandwidth signal classes, such as audio [12]. On the other hand, the mathematics of oversampled A/D conversion is highly challenging, as the selection problem for (q_n) shifts from being local (i.e., the q_n's are chosen independently for each n) to a more global one: a quantized assignment q_n ∈ A should be computed based not only on the current sample value x(t_n), but also taking into account sample values x(t_k) and assignments q_k in neighboring positions. Many "online" quantization applications, such as A/D conversion of audio signals, require causality, i.e., only quantities that depend on prior instances of time can be utilized. Other applications, such as digital halftoning, may not be strictly bound by the same kind of causality restrictions, although it is still useful to process samples in some preset order.
In both situations, the amount of memory that can be employed in the quantization algorithm is one of the limiting factors determining the performance of the algorithm.

This paper is concerned with the approximation theory of oversampled, coarse quantization, in particular, one-bit quantization of bandlimited functions. Despite the vast engineering literature on the subject (e.g., see [12]) and a recent series of more mathematically oriented papers (e.g., [5, 15, 7, 8, 9, 1, 2]), the fundamental question of how to carry out optimal quantization remains open. After the pioneering work of Daubechies and DeVore on the mathematical analysis and design of Σ∆ modulators [5], more recent work showed that exponential accuracy in the oversampling ratio λ can be achieved by appropriate one-bit Σ∆ modulation schemes [7]. The best achievable error decay rate for these schemes was O(2^{−rλ}) with r ≈ 0.076 in [7]. Later, with a modification, this rate was improved to r ≈ 0.088 in [11]. It is known that any one-bit quantization scheme has to obey r < 1, whereas it is not known if this upper bound is tight [4, 7]. This paper improves the best achievable rate further to r ≈ 0.102 by designing an optimal family of Sigma-Delta modulation schemes within the class of so-called "minimally supported" recursion filters. These schemes were introduced in [7] together with a number of accompanying open problems. The main results of this paper (see Theorems 4.1, 5.5, 5.6) are based on the solution of one of these problems, namely, the optimization of the minimally supported recursion filters that are used in conjunction with the greedy rule of quantization. One of our main results is that the supports of these optimal filters are given by suitably scaled zero sets of Chebyshev polynomials of the second kind, and we use this result to derive our improved exponent for the error bound. The paper is organized as follows.
In Section 2, we review the basic mathematical theory of Σ∆ modulation as well as the constructions and methods of [7] which will be relevant to this paper, such as the family of minimally supported recursion filters, the greedy quantization rule, and fundamental aspects of the error analysis leading to exponentially decaying error bounds. As we explain, the discrete optimization problem introduced in [7] plays a key role in the analysis. In Section 3, we introduce a relaxed version of the optimization problem, which is analytically tractable. This relaxed problem is solved in Section 4 and analyzed asymptotically in Section 5 as the order of the reconstruction filter goes to infinity. Section 5 also contains the construction of asymptotically optimal solutions to the discrete optimization problem, which yields the exponential error decay rate mentioned above. Finally, we extend our results to multi-level quantization alphabets in Section 6. We also collect, separately in the Appendix, the properties and identities for Chebyshev polynomials which are used in the proofs of our results.

2 Background on Σ∆ modulation

2.1 Noise shaping and feedback quantization

Σ∆ modulation is a generic name for a family of recursive quantization algorithms that utilize the concept of "noise shaping". (The origin of the terminology Σ∆, or alternatively ∆Σ, goes back to the patent application of Inose et al. [10] and refers to the presence of certain circuit components at the level of A/D circuit implementation; see also [12, 13].) Let (y_n) be a general sequence to be quantized (for example, y_n = x(nτ)), let (q_n) denote the quantized representation of this sequence, and let (ν_n) be the quantization error sequence (i.e., the "noise"), defined by ν := y − q. Noise shaping is a quantization strategy whose objective is to arrange for the quantization noise ν to fall outside the frequency band of interest, which, in our case, is the low-frequency band.
Note that the effective error we are interested in is e = T_τ^ϕ(ν); hence one would like ν to be close to the kernel of T_τ^ϕ. It is useful to think of T_τ^ϕ(ν) as a generalized convolution of the sequence ν and the function ϕ (sampled at the scale τ). Let us introduce the notation

\[
\nu \circledast_\tau \varphi := T_\tau^\varphi(\nu). \tag{6}
\]

Note that (a ∗ b) ⊛_τ ϕ = a ⊛_τ (b ⊛_τ ϕ) and a ⊛_τ (ϕ ∗ ψ) = (a ⊛_τ ϕ) ∗ ψ, etc., where a and b are sequences on ℤ, ϕ and ψ are functions on ℝ, and ∗ denotes the usual convolution operation (on ℤ and ℝ). By taking the Fourier transform of (5), one sees that the kernel of T_τ^ϕ consists of sequences ν that are spectrally disjoint from ϕ, i.e., "high-pass" sequences, since ϕ is a "low-pass" function, as apparent from (2). Thus, arranging for ν to be (close to) a high-pass sequence is the primary objective of Σ∆ modulation.

High-pass sequences have the property that their Fourier transforms vanish at zero frequency (and possibly in a neighborhood). For a finite high-pass sequence s, the Fourier transform \( \widehat{s}(\xi) := \sum_n s_n e^{2\pi i n \xi} \) has a factor \( (1 - e^{2\pi i \xi})^m \) for some positive integer m, which means that s = ∆^m w for some finite sequence w. Here, ∆ denotes the finite difference operator defined by

\[
(\Delta w)_n := w_n - w_{n-1}. \tag{7}
\]

The quantization error sequence ν = y − q, however, need not be finitely supported, and therefore ν̂ need not be defined as a function (since ν is bounded at best). Nevertheless, this spectral factorization can be used to model more general high-pass sequences. Indeed, a Σ∆ modulation scheme of order m utilizes the difference equation

\[
y - q = \Delta^m u \tag{8}
\]

to be satisfied for each input y and its quantization q, for an appropriate auxiliary sequence u (called the state sequence). This explicit factorization of the quantization error is useful if u is a bounded sequence, as will be explained in more detail in the next subsection. In practice, (8) is used as part of a quantization algorithm.
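The high-pass factorization s = ∆^m w can be illustrated with a short computation: differencing any finite sequence twice produces a sequence whose zeroth and first moments vanish, which is exactly the statement that ŝ vanishes to order 2 at ξ = 0 (since ŝ^(k)(0) = (2πi)^k Σ_n n^k s_n). The helper `delta` and the example sequence below are our own.

```python
# Illustration (not from the paper): s = Delta^m w for finite w implies
# s_hat has the factor (1 - e^{2 pi i xi})^m, i.e. s_hat vanishes to
# order m at xi = 0; equivalently the moments sum_n n^k s_n vanish for
# k = 0, ..., m-1.  Shown here for m = 2.

def delta(w):                     # (Delta w)_n = w_n - w_{n-1}, zero-padded
    w = list(w) + [0.0]
    return [w[n] - (w[n - 1] if n > 0 else 0.0) for n in range(len(w))]

w = [1.0, 2.0, 3.0, 5.0]
s = delta(delta(w))               # m = 2
assert abs(sum(s)) < 1e-12                                   # s_hat(0) = 0
assert abs(sum(n * sn for n, sn in enumerate(s))) < 1e-12    # s_hat'(0) = 0
assert abs(sum(w)) > 0            # w itself is not high-pass
```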
That is, given any sequence (y_n)_{n≥0}, its quantization (q_n)_{n≥0} is generated by a recursive algorithm that satisfies (8). This is achieved via an associated "quantization rule"

\[
q_n = Q(u_{n-1}, u_{n-2}, \dots, y_n, y_{n-1}, \dots), \tag{9}
\]

together with the "update rule"

\[
u_n = \sum_{k=1}^{m} (-1)^{k-1} \binom{m}{k} u_{n-k} + y_n - q_n, \tag{10}
\]

which is a restatement of (8). Typically one employs the initial conditions u_n = 0 for n < 0. In the electrical engineering literature, such a recursive procedure for quantization is called "feedback quantization" due to the role q_n plays as a (nonlinear) feedback term via (9) for the difference equation (8). Note that if y and q are given unrelated sequences and u is determined from y − q via (8), then u would typically be unbounded for m ≥ 1. Hence the role of the quantization rule Q is to tie q to y in such a way as to control u.

A Σ∆ modulator may also arise from a more general difference equation of the form

\[
y - q = H * v \tag{11}
\]

where H is a causal sequence (i.e., H_n = 0 for n < 0) in ℓ¹ with H_0 = 1. If H = ∆^m g with g ∈ ℓ¹, then any (bounded) solution v of (11) gives rise to a (bounded) solution u of (8) via u = g ∗ v. Thus (11) can be rewritten in the canonical form (8) by a change of variables. Nevertheless, there are significant advantages to working directly with representations of the form (11), as explained in Section 2.3 below.

2.2 Basic error estimates

Basic error analysis of Σ∆ modulation relies only on the boundedness of solutions u of (8) for given y and q, and not on the specifics of the quantization rule Q that was employed. It is useful, however, to consider arbitrary quantizer maps M : y ↦ q that satisfy (8) for some m and u, where u may or may not be bounded. Note that if u is a solution to (8) for a given triple (y, q, m), then ũ := ∆u is a solution for the triple (y, q, m−1). Hence a given map M can be treated as a Σ∆ modulator of different orders. Formally, we will refer to the pair (M, m) as a Σ∆ modulator of order m.
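As a minimal sketch of feedback quantization (our own illustration; the schemes of interest in the paper are higher-order, with minimally supported filters), consider the first-order case m = 1 with the greedy rule q_n = sign(u_{n−1} + y_n), which picks the q_n ∈ {−1, +1} minimizing |u_n|. It realizes (8) exactly and keeps |u_n| ≤ 1 whenever ‖y‖_{ℓ∞} ≤ 1.

```python
import math

def sigma_delta_1(y):
    # First-order greedy Sigma-Delta: u_n = u_{n-1} + y_n - q_n,
    # q_n = sign(u_{n-1} + y_n), with initial condition u_{-1} = 0.
    u_prev, q, u = 0.0, [], []
    for yn in y:
        p = u_prev + yn
        qn = 1.0 if p >= 0 else -1.0    # greedy: minimizes |p - qn|
        u_prev = p - qn
        q.append(qn)
        u.append(u_prev)
    return q, u

y = [0.5 * math.sin(0.05 * n) for n in range(2000)]
q, u = sigma_delta_1(y)

assert max(abs(v) for v in u) <= 1.0    # stability: bounded state sequence
# (8) with m = 1: y - q = Delta u (up to floating-point roundoff)
assert all(abs((y[n] - q[n]) - (u[n] - (u[n - 1] if n else 0.0))) < 1e-12
           for n in range(len(y)))
```

The stability argument is one line: if |u_{n−1}| ≤ 1 and |y_n| ≤ 1, then p = u_{n−1} + y_n ∈ [−2, 2], and subtracting sign(p) leaves u_n ∈ [−1, 1]. For m ≥ 2, the plain greedy rule with filter ∆^m is no longer automatically stable, which is what motivates the more elaborate filters discussed in the paper.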
As indicated above, in order for a Σ∆ modulator (M, m) to be useful, there must be a bounded solution u to (8). (Note that, up to an additive constant, there can be at most one such bounded solution once m > 0.) Moreover, one would like this to be the case for all input sequences y in a given class Y, such as Y_µ = {y : ‖y‖_{ℓ∞} ≤ µ} for some µ > 0. In this case, we say that (M, m) is stable for the input class Y. Clearly, if (M, m) is stable for Y, then (M, m−1) is stable for Y as well. To any quantizer map M and class of inputs Y, we assign its maximal order m*(M, Y) via

\[
m^*(\mathcal{M}, \mathcal{Y}) := \sup \bigl\{ m : \forall y \in \mathcal{Y},\ \exists u \in \ell^\infty \text{ such that } y - \mathcal{M}(y) = \Delta^m u \bigr\}. \tag{12}
\]

Note that both 0 and ∞ are admissible values for m*(M, Y). With this notation, (M, m) is stable for the class Y if and only if m ≤ m*(M, Y).

Stability is a crucial property. Indeed, it was shown in [5] that a stable m-th order scheme with bounded solution u results in the error bound

\[
\|e\|_{L^\infty} \le \|u\|_{\ell^\infty}\, \|\varphi^{(m)}\|_{L^1}\, \tau^m, \tag{13}
\]

where ϕ^{(m)} denotes the m-th order derivative of ϕ. The proof of (13) employs repeated summation by parts, giving rise to the commutation relation

\[
(\Delta^m u) \circledast_\tau \varphi = u \circledast_\tau (\Delta_\tau^m \varphi), \tag{14}
\]

where ∆_τ is the finite difference operator at scale τ defined by (∆_τ ϕ)(·) := ϕ(·) − ϕ(· − τ). The left-hand side of (14) is simply equal to the error signal e by definition. On the other hand, the right-hand side of this relation immediately leads to the error bound (13). As u is not uniquely determined by the equation y − M(y) = ∆^m u, it is convenient to introduce the notation

\[
U(\mathcal{M}, m, y) := \inf \bigl\{ \|u\|_{\ell^\infty} : y - \mathcal{M}(y) = \Delta^m u \bigr\}, \tag{15}
\]

and for an input class Y,

\[
U(\mathcal{M}, m, \mathcal{Y}) := \sup_{y \in \mathcal{Y}} U(\mathcal{M}, m, y). \tag{16}
\]

We are interested in applying (13) to sequences y that arise as samples y_n = x(nτ) of a bandlimited signal x as above. To compare the error bounds for different values of τ, one could consider the class Y^{(x)} = {y = (y_n)_{n∈ℤ} : y_n = x(nτ) for some τ} and work with the constant U(M, m, Y^{(x)}).
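The commutation relation (14) is a finite summation-by-parts identity and can be checked numerically for m = 1 and a finitely supported u. The Gaussian test kernel and the sequence below are arbitrary stand-ins of ours (the identity holds for any ϕ and needs no admissibility assumption).

```python
import math

# Check of (14) for m = 1:  (Delta u) conv_tau phi = u conv_tau (Delta_tau phi),
# where (a conv_tau phi)(t) = tau * sum_n a_n phi(t - n*tau) as in (6).

tau = 0.25
phi = lambda t: math.exp(-t * t)           # arbitrary smooth test kernel

u = {0: 0.7, 1: -0.2, 2: 1.1, 3: 0.4}      # finitely supported sequence
du = {n: u.get(n, 0.0) - u.get(n - 1, 0.0) for n in range(0, 5)}  # Delta u

def conv(a, t, f):
    return tau * sum(an * f(t - n * tau) for n, an in a.items())

for t in (-0.3, 0.0, 0.8, 2.5):
    lhs = conv(du, t, phi)                              # (Delta u) conv_tau phi
    rhs = conv(u, t, lambda s: phi(s) - phi(s - tau))   # u conv_tau (Delta_tau phi)
    assert abs(lhs - rhs) < 1e-12
```

The verification is just an index shift: Σ_n (u_n − u_{n−1}) ϕ(t − nτ) regroups into Σ_n u_n (ϕ(t − nτ) − ϕ(t − (n+1)τ)), which is where the bound (13) comes from after dividing and multiplying by τ in each difference.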
However, it is difficult to estimate U(M, m, Y^{(x)}) in a way that accurately reflects the detailed nature of the signal x. Instead, we note that for ‖x‖_{L∞} ≤ µ, one has Y^{(x)} ⊂ Y_µ, where Y_µ = {y = (y_n)_{n∈ℤ} : ‖y‖_{ℓ∞} ≤ µ} as defined above. This leads to the bound

\[
\|e\|_{L^\infty} \le U(\mathcal{M}, m, \mathcal{Y}_\mu)\, \|\varphi^{(m)}\|_{L^1}\, \tau^m. \tag{17}
\]

In (17), the reconstruction kernel ϕ is restricted by the τ-dependent admissibility condition (2). However, if a reconstruction kernel ϕ_0 is admissible in the sense of (2) for τ = τ_0 = 1/ρ_0, then it is admissible for all τ ≤ τ_0. This allows one to fix ρ_0 = (1+ε)ρ_crit for some small ε > 0, and set ϕ = ϕ_0. Now Bernstein's inequality¹ implies that ‖ϕ_0^{(m)}‖_{L¹} ≤ (πρ_0)^m ‖ϕ_0‖_{L¹}, since ϕ̂_0 is supported in [−ρ_0/2, ρ_0/2]. This estimate allows us to express the error bound naturally as a function of the oversampling ratio λ defined in (3), as follows: Let ε and ϕ_0 be fixed as above, let M be a given quantizer function, and let m be a positive integer. Then for any given Ω-bandlimited function x with ‖x‖_{L∞} ≤ µ, the quantization error e = e_λ, as expressed in (4) in terms of τ = 1/(λρ_crit), can be bounded in terms of λ:

\[
\|e_\lambda\|_{L^\infty} \le U(\mathcal{M}, m, \mathcal{Y}_\mu)\, \|\varphi_0\|_{L^1}\, \pi^m (1+\varepsilon)^m \lambda^{-m}. \tag{18}
\]

As the constants are independent of λ, this yields an O(λ^{−m}) bound as a function of λ. Note that any dependency of the error on the original bandwidth parameter Ω has now been effectively absorbed into the fixed constant ‖ϕ_0‖_{L¹}. In fact, this quantity need not even depend on Ω; a reconstruction kernel ϕ*_0 can be designed once and for all corresponding to Ω = 1, and then employed to define ϕ_0(t) := Ωϕ*_0(Ωt), which is admissible and has the same L¹-norm as ϕ*_0.

¹ If f̂ is supported in [−A, A], then ‖f′‖_{L^p} ≤ 2πA‖f‖_{L^p} for 1 ≤ p ≤ ∞.

2.3 Exponentially accurate Σ∆ modulation

The O(λ^{−m}) error decay rate derived in the previous section is based on using a fixed Σ∆ modulator. It is possible, however, to improve the bounds by choosing the modulator adaptively as a function of λ.
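As an aside, the Bernstein inequality invoked in Section 2.2 is easy to sanity-check numerically. The trigonometric example below is ours, with frequencies 0.4 and 1.0, so its spectrum sits in [−A, A] with A = 1.

```python
import math

# Numerical illustration of Bernstein's inequality from the footnote:
# if f_hat is supported in [-A, A], then ||f'||_inf <= 2*pi*A*||f||_inf.

A = 1.0
f  = lambda t: math.cos(2 * math.pi * 0.4 * t) + 0.5 * math.sin(2 * math.pi * 1.0 * t)
df = lambda t: (-0.8 * math.pi * math.sin(2 * math.pi * 0.4 * t)
                + math.pi * math.cos(2 * math.pi * 1.0 * t))   # exact derivative

grid = [5.0 * k / 5000 for k in range(5000)]   # one common period [0, 5)
sup_f  = max(abs(f(t))  for t in grid)         # grid estimate of ||f||_inf
sup_df = max(abs(df(t)) for t in grid)         # grid estimate of ||f'||_inf
assert sup_df <= 2 * math.pi * A * sup_f * 1.001   # small slack for the grid
```

Note the inequality is sharp: for the single frequency f(t) = cos(2πAt) one has ‖f′‖∞ = 2πA‖f‖∞ exactly.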
In order to obtain an error decay rate better than polynomial, one needs an infinite family ((M_m, m))_{m=1}^∞ of stable Σ∆ modulation schemes from which the optimal scheme M_{m_opt} (i.e., one that yields the smallest bound in (18)) is selected as a function of λ. This point of view was first pursued systematically in [5]. Finding a stable Σ∆ modulator is in general a non-trivial matter as m increases, and especially so if the alphabet A is a small set. The extreme case is one-bit Σ∆ modulation, i.e., when card(A) = 2. The first infinite family of arbitrary-order, stable one-bit Σ∆ modulators was also constructed in [5]. In the one-bit case, one may set A = {−1, +1} to normalize the amplitude, and choose µ ≤ 1 when defining the input class Y = Y_µ. The optimal order m_opt and the size of the resulting bound on ‖e_λ‖_{L∞} depend on the constants U(M_m, m, Y_µ).

There are a priori lower bounds on U(M, m, Y_µ) for any 0 < µ ≤ 1 and any quantizer M. Indeed, as shown in [7], one obtains a super-exponential lower bound on U(M, m, Y_µ) by considering the average metric entropy of the space of bandlimited functions, as follows. Define the quantity

\[
U_m(\mathcal{Y}) := \inf_{\mathcal{M}} U(\mathcal{M}, m, \mathcal{Y}) \tag{19}
\]

and let

\[
X_\mu := \{ x : \operatorname{supp} \widehat{x} \subset [-1/2, 1/2],\ \|x\|_{L^\infty} \le \mu \}. \tag{20}
\]

Then, as shown in [6, 7], one has for any one-bit quantizer

\[
\sup\{ \|e_\lambda\|_{L^\infty} : x \in X_\mu \} \gtrsim_\mu 2^{-\lambda}, \tag{21}
\]

which yields, when used with (18), the result that U_m(Y_µ) ≳_µ (mc)^m for some absolute constant c > 0 [7]. Here by A ≳_s B, we mean that A ≥ CB for some constant C that may depend on s, but not on any other input variables of A and B.

For the family of Σ∆ modulators constructed in [5], which we will denote by (D_m)_{m=1}^∞, the best upper bounds on U(D_m, m, Y_µ) are of order exp(cm²), resulting in the order-optimized error bound ‖e_λ‖_{L∞} ≲ exp(−c(log λ)²), which is substantially larger than the exponentially small lower bound of (21). The first construction that led to exponentially accurate Σ∆ modulation was given later in [7].
The associated modulators, here denoted by (G_m)_{m=1}^∞, satisfy the bound

\[
U(\mathcal{G}_m, m, \mathcal{Y}_\mu) \lesssim (ma)^m, \tag{22}
\]

for some a = a(µ) > 0. Substituting (22) into (18) and using the elementary inequality

\[
\min_m\, m^m \alpha^{-m} \lesssim e^{-\alpha/e} \tag{23}
\]

with α = λ/(aπ(1 + ε)), one obtains the order-optimized error bound

\[
\sup\{ \|e_\lambda\|_{L^\infty} : x \in X_\mu \} \lesssim 2^{-r\lambda}, \tag{24}
\]

where r = r(µ) = (aπe(1 + ε) log 2)^{−1}. On the other hand, the smallest achievable value of a is shown to be 6/e, which corresponds to the largest achievable value of r = r_max((G_m)_{m=1}^∞) ≈ (6π log 2)^{−1} ≈ 0.076. Observe from (21)
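The inequality (23) and the resulting rate can be sanity-checked with a few lines (our own helpers, not from the paper): the continuous minimizer of m^m α^{−m} is m = α/e, with minimum value e^{−α/e}, and restricting m to integers only costs a small factor. Here log denotes the natural logarithm.

```python
import math

def log_min_over_m(alpha, m_max=400):
    # log of min over integers 1 <= m <= m_max of m^m * alpha^(-m);
    # working in the log domain avoids overflow for large m.
    return min(m * (math.log(m) - math.log(alpha)) for m in range(1, m_max + 1))

for alpha in (10.0, 40.0, 160.0):
    # the integer minimum is within a small additive slack of -alpha/e,
    # consistent with (23)
    assert log_min_over_m(alpha) <= -alpha / math.e + 0.2

def rate(a, eps=0.0):
    # decay exponent r = (a*pi*e*(1+eps)*log 2)^(-1) from (24)
    return 1.0 / (a * math.pi * math.e * (1 + eps) * math.log(2))

print(rate(6 / math.e))           # = 1/(6*pi*log 2) ≈ 0.0765, the rate of [7]
```

With a = 6/e the factor e cancels, giving r = (6π log 2)^{−1} ≈ 0.076 in the limit ε → 0, matching the value quoted above; the paper's improvement to r ≈ 0.102 corresponds to the smaller value of a achieved by the optimal minimally supported filters.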