
Apr 16, 2010

Signal processing with compressed measurements

This post is about two papers on signal processing in the compressed domain. In the special issue on Compressive Sensing of the IEEE Journal of Selected Topics in Signal Processing we can find an article titled "Signal Processing With Compressive Measurements" [D10]. The abstract reads:

"The recently introduced theory of compressive sensing enables the recovery of sparse or compressible signals from a small set of nonadaptive, linear measurements. If properly chosen, the number of measurements can be much smaller than the number of Nyquist-rate samples. Interestingly, it has been shown that random projections are a near-optimal measurement scheme. This has inspired the design of hardware systems that directly implement random measurement protocols. However, despite the intense focus of the community on signal recovery, many (if not most) signal processing problems do not require full signal recovery. In this paper, we take some first steps in the direction of solving inference problems (such as detection, classification, or estimation) and filtering problems using only compressive measurements and without ever reconstructing the signals involved. We provide theoretical bounds along with experimental results."

A general framework for processing compressed measurements without reconstructing the original signal would be of major importance. However, I find that the paper fails to address a key point of the problem: sparsity. If sparsity is ignored, the compressed measurements (in the usual setting) reduce to a linearly transformed version of the original signal, which fits into the classical signal processing framework, since the compression matrix is assumed known. [D10] analyzes the performance of different signal processing algorithms (detection, estimation and filtering) operating on compressed measurements when the compression matrices are drawn at random so as to satisfy a modified restricted isometry property (RIP). Since the only reference to the sparsity of the original signal is then through the RIP of the compression matrices, higher processing performance may be achievable by first reconstructing the signal (remember that the RIP is a sufficient but not a necessary condition for reconstruction).
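The point that known-matrix compressed measurements remain amenable to classical processing can be illustrated with a matched filter applied directly in the compressed domain, in the spirit of the detection problem studied in [D10]. This is only a minimal sketch, not the paper's algorithm: the signal model, dimensions and noise level are arbitrary assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

n, m = 1024, 128                 # ambient dimension, number of compressive measurements
s = rng.standard_normal(n)       # known target signal (assumed known to the detector)
Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random Gaussian compression matrix

sigma = 0.5                      # noise standard deviation (illustrative)
trials = 2000
template = Phi @ s               # compressed template: no reconstruction needed
stats_h1, stats_h0 = [], []
for _ in range(trials):
    noise = sigma * rng.standard_normal(n)
    stats_h1.append(template @ (Phi @ (s + noise)))  # H1: signal present
    stats_h0.append(template @ (Phi @ noise))        # H0: noise only

# threshold halfway between the empirical means of the two hypotheses
thr = 0.5 * (np.mean(stats_h1) + np.mean(stats_h0))
p_detect = np.mean(np.array(stats_h1) > thr)
p_false = np.mean(np.array(stats_h0) > thr)
print(p_detect, p_false)
```

With these (arbitrary) dimensions the test statistic separates the two hypotheses cleanly, even though the detector never leaves the compressed domain; the price of compression appears as a loss in effective SNR, not as a change in the detector's structure.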

A much more involved analysis of the estimation setting was presented during the compressed sensing day at ICASSP. In "On unbiased estimation of sparse vectors corrupted by Gaussian noise" [J10], Alexander Jung and his coauthors theoretically analyze the achievable estimation performance in the sparse setting, as opposed to [D10], which only analyzes a rather simple estimation problem from a set of compressed measurements.

[J10] studies the problem of estimating a sparse parameter vector with unknown support. The interesting result appears when the cardinality of this support is assumed known. In this setting, the Cramér–Rao bound (CRB) of the estimate coincides with the mean squared error (MSE) obtained by an estimator with a priori knowledge of the positions of the non-zero entries (the support) of the vector to be estimated [B09]. It is easy to see that this bound is not achievable if the support cannot be determined from the measurements, e.g. in the low-SNR regime. The Barankin bound (BB) is also a lower bound on unbiased estimation performance, and in general it is tighter than the CRB. The interesting part is that it shows a behaviour similar to that of the maximum likelihood (ML) estimate of the parameter vector.
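The known-support result is easy to check numerically: for observations y = x + n with Gaussian noise of variance σ² and an s-sparse x whose support is revealed to the estimator, the oracle estimator that keeps only the support entries of y attains an MSE of s·σ², the CRB discussed in [B09]. A minimal sketch (dimensions, sparsity and noise level are illustrative assumptions, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

n, s_card = 256, 8               # vector length, support cardinality (assumed known)
sigma = 0.3                      # noise standard deviation
trials = 5000

support = rng.choice(n, s_card, replace=False)
x = np.zeros(n)
x[support] = 1.0                 # sparse parameter vector with unit non-zero entries

err_oracle = 0.0
for _ in range(trials):
    y = x + sigma * rng.standard_normal(n)   # y = x + Gaussian noise
    x_hat = np.zeros(n)
    x_hat[support] = y[support]              # oracle: keep only the known support
    err_oracle += np.sum((x_hat - x) ** 2)

mse_oracle = err_oracle / trials
crb = s_card * sigma ** 2        # CRB with known support: s * sigma^2
print(mse_oracle, crb)
```

The empirical MSE of the oracle estimator matches s·σ² closely; the gap between this bound and what is achievable without knowing the support is precisely what the Barankin-bound analysis in [J10] captures at low SNR.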

[Fig. 1: Estimation bounds for the sparse setting, including the Barankin bound.]

Fig. 1 shows the behaviour of the different bounds for a given problem instance. We can see that while the CRB is constant even for low SNR values, the BB rises in this regime due to the impossibility of completely determining the support. We can also see that the ML estimator presents a similar behaviour, the only difference being the higher SNR threshold at which its MSE drops to the CRB.


[D10] M. A. Davenport, P. T. Boufounos, M. B. Wakin, and R. G. Baraniuk, "Signal Processing With Compressive Measurements," IEEE Journal of Selected Topics in Signal Processing, April 2010.

[J10] A. Jung, Z. Ben-Haim, F. Hlawatsch, and Y. C. Eldar, "On unbiased estimation of sparse vectors corrupted by Gaussian noise," in Proc. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2010), Dallas, TX, March 2010.

[B09] Z. Ben-Haim and Y. C. Eldar, "The Cramér–Rao bound for sparse estimation," IEEE Trans. Signal Processing, May 2009, submitted.



