

Jun 16, 2011

Some reading to keep up with Cognitive Radio

Cognitive Radio. In recent weeks I discovered some interesting articles related to the transition that is happening in the field of cognitive radio. It is becoming apparent that the focus of cognitive radio research is shifting from theory to more practical and regulatory considerations. Here is my recommended reading:

The article "Cognitive radio: Ten years of experimentation and development" [P+11] by P. Pawelczak et al. makes an extensive review on the hardware and system development for cognitive radio. It describes the main implementation platforms and systems that can be used for testing and performance evaluation of theoretical algorithms related to cognitive radio. From the most popular GNU radio to other composite hardware/software frameworks. Some relevant conclusions:
  • There are practically no comprehensive CR demonstration platforms
  • Open SDR platforms dominate the research market
  • Many testbeds are not DSA in the strict meaning of the term
  • OFDM is typically the design choice for waveforms
  • Energy detection is the most popular signal detection method (a minimal sketch follows this list)
  • Geolocation and sensing are needed for maximum reliability but at a cost
  • Lack of appropriate RF front ends
  • Small and centralized systems are the design choice for most of the platforms
  • No dramatic increase in the number of available CR and network prototypes
  • Only one third of the presented demos are from the US
  • Universities dominate the demonstration market
  • More emphasis is needed on reporting failures
  • Each demonstration is developed by a small number of people
  • Absence of IEEE 802.22 demonstrations
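
Since energy detection shows up as the dominant technique, here is a minimal sketch of a classical energy detector (my own illustration in Python/NumPy, not code from the paper; the CLT-based threshold and all numbers are assumptions):

```python
import numpy as np
from scipy.stats import norm

def energy_detector(x, noise_var, pfa=0.01):
    """Average-energy test with a CLT-based threshold: under noise only
    the statistic has mean noise_var and std noise_var/sqrt(N), so the
    threshold is chosen to meet the target false-alarm probability."""
    N = len(x)
    stat = np.mean(np.abs(x) ** 2)
    thr = noise_var * (1 + norm.ppf(1 - pfa) / np.sqrt(N))
    return stat > thr

# Toy usage with hypothetical numbers: noise only vs. noise + weak tone
rng = np.random.default_rng(0)
N, sigma2 = 1024, 1.0
noise = np.sqrt(sigma2 / 2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
tone = 0.4 * np.exp(2j * np.pi * 0.1 * np.arange(N))
print(energy_detector(noise, sigma2))         # False with probability 1 - pfa
print(energy_detector(noise + tone, sigma2))  # usually True at this SNR
```

Its popularity is easy to understand from the sketch: the detector needs no knowledge of the signal, only of the noise power.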
In addition, I include two links here. The first summarizes the state of the work behind ECC Report 159, while the second presents an interesting discussion on the intended mix of policy and technology at the DySPAN conference.

Cognitive Radio in the ECC: Where we are now and where we are going
"The ECC set up a Project Team, PT SE43, to look at the compatibility issues between the relevant services using 'white spaces' in the UHF TV bands. After 19 months' work, SE43 delivered ECC Report 159 in January 2011. Its conclusions about the technical and operational requirements of WSDs are not so different from those of the FCC in the US (the FCC has already approved operators of databases for cognitive WSD devices). [...]"

Linda Doyle: Should we let DySPAN die?
"... the policy and technology mix is working as well as it could. At the plenary sessions the keynotes and papers reflect the mix as intended by the founders of DySPAN. But in the afternoons we split into policy and technology tracks. In the main policy people go to policy tracks and technology people go to technology tracks. [...]"

[P+11]

P. Pawelczak, K. Nolan, L. Doyle, S. W. Oh, and D. Cabric, "Cognitive radio: Ten years of experimentation and development," IEEE Communications Magazine, vol. 49, no. 3, pp. 90-100, March 2011.



Jun 6, 2011

Is the PHY Layer Dead?

Is the PHY Layer Dead? My last post closed the series devoted to ICASSP 2011. Today I want to refer to an article I read some time ago, "Is the PHY Layer Dead?" [DH+10], coauthored by M. Dohler, R. W. Heath Jr., A. Lozano, C. Papadias and R. A. Valenzuela. The origins of this paper go back to a discussion held at IEEE VTC Spring 2009 about the relevance of current research on the physical layer (PHY). The article is really interesting and worth reading for any researcher working in the field.

Some of the questions raised there can be particularized to cognitive radio. Here are a couple of thoughts:

The cognitive radio research community has developed an extensive set of detectors for multiple system models. Have we achieved a detection performance close to what we can expect from a cognitive radio device?

As James Neel argued in one of his posts,
"that there’s waaay too many signal classification / detection papers and effort would be better spent on other aspects of learning about a radio’s environment."

In my opinion the answer is not so clear. First, in most practical detection problems there exists no clear performance limit that can be used as a reference for the available improvement margin. The optimal detector, given by the Neyman-Pearson criterion, could in principle serve as a benchmark. However, it is not implementable in the presence of nuisance parameters, and thus its performance cannot be guaranteed to be achievable.

Second, in certain scenarios the analysis of well-performing detectors, such as the GLRT, may offer insight into the information a learning algorithm requires. One simple example: if the GLRT is a function only of the largest eigenvalue of the sample covariance matrix, then this eigenvalue is a good input to a learning algorithm. Hence the algorithm does not need to process the whole data set, which may be computationally infeasible.
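
To make the example concrete, here is a minimal sketch of that feature extraction (my own illustration with hypothetical array sizes, not tied to any particular GLRT derivation):

```python
import numpy as np

def max_eigenvalue_feature(X):
    """Compress a block of M-sensor snapshots into a single number: the
    largest eigenvalue of the sample covariance matrix, normalized by
    the average eigenvalue. Values near 1 suggest white noise only;
    larger values suggest a correlated (e.g. primary-user) signal."""
    M, N = X.shape                      # M sensors, N snapshots
    R = (X @ X.conj().T) / N            # sample covariance (M x M)
    eig = np.linalg.eigvalsh(R)         # real eigenvalues, ascending
    return eig[-1] / eig.mean()

# Toy usage: noise only vs. a rank-one signal buried in noise
rng = np.random.default_rng(1)
M, N = 4, 500
noise = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)
steer = rng.standard_normal((M, 1)) + 1j * rng.standard_normal((M, 1))
s = 0.5 * (rng.standard_normal((1, N)) + 1j * rng.standard_normal((1, N)))
print(max_eigenvalue_feature(noise))              # close to 1
print(max_eigenvalue_feature(noise + steer @ s))  # noticeably larger
```

A learning algorithm fed with this single number per block sees only a tiny fraction of the raw M x N data.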


The cognitive radio community has focused mainly on clean, idealized problems, which has led to an extensive set of algorithms and mathematical tools. Can these be translated to more sophisticated system problems, such as the ones one expects to find in real environments?

As Volkan pointed out, Wi-Fi can always deal with simple scenarios; however, when there are 570 Wi-Fi base stations operating in one room, all these uncoordinated networks crash. In my opinion this "worst case" should always be taken into account when thinking about cognitive radio algorithms. Moreover, empirical results from test-beds are so far quite limited, and such experiments should be promoted.

These are just some ideas. Several other questions come to my mind: for example, whether we are focusing too much on a specific application (why cognitive radio?), or the connections between academia and industry (is there already an industry around cognitive radio?)... What do you think?

[DH+10]

M. Dohler, R. W. Heath Jr., A. Lozano, C. Papadias, and R. A. Valenzuela, "Is the PHY Layer Dead?," IEEE Communications Magazine, 2010.



Jun 1, 2011

More hints on compressed sensing at ICASSP 2011

ICASSP 2011. The last post of this series about ICASSP 2011 is devoted to compressed sensing. Here is a selection of compressed-sensing-related papers from different sessions:

"THE VALUE OF REDUNDANT MEASUREMENT IN COMPRESSED SENSING"; Victoria Kostina, Princeton University, US; Marco Duarte, Duke University, United States; Sina Jafarpour, Princeton University, US; Robert Calderbank, Duke University, US

The conference room was completely full during the presentation of this paper, in my opinion because of its evocative title. However, the scenario studied in this work is much more specific than one might think from the title. The authors study the performance of a particular family of measurement matrices (weakly democratic) under the assumption that the CS quantizer may reject some of the initially acquired measurements. An overall bit budget for quantization is divided between (i) the bits that encode the indices of the rejected measurements and (ii) the remaining bits, which encode the values of the preserved measurements. Under these assumptions the paper concludes that throwing away certain measurements improves recovery SNR, i.e., it is better to have fewer measurements quantized on a finer grid than many coarse measurements.
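
The following back-of-the-envelope sketch tries to capture that intuition (my own toy with hypothetical numbers; the paper's actual analysis concerns weakly democratic matrices and recovery SNR, not this simplistic uniform quantizer):

```python
import numpy as np
from math import comb, log2

def quantizer_step(y, B, r):
    """Toy bit-budget split: drop the r largest-magnitude measurements,
    spend about log2(C(m, r)) bits encoding which indices were dropped,
    and quantize the remaining m - r values uniformly over their (now
    smaller) dynamic range with the bits that are left."""
    m = len(y)
    index_bits = log2(comb(m, r)) if r > 0 else 0.0
    bits_each = (B - index_bits) / (m - r)
    kept = np.sort(np.abs(y))[: m - r]      # the m - r smallest magnitudes
    return 2 * kept[-1] / 2 ** bits_each    # uniform quantizer step size

rng = np.random.default_rng(2)
y = rng.standard_normal(256)                # hypothetical Gaussian measurements
for r in (0, 16, 64):
    print(r, round(quantizer_step(y, 1024, r), 3))
```

With these numbers the quantizer step over the kept measurements shrinks as r grows, even after paying for the index overhead, which matches the conclusion above.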

"COMPRESSIVE SENSING MEETS GAME THEORY"; Sina Jafarpour, Princeton University, US; Volkan Cevher, Ecole Polytechnique Fédérale de Lausanne, Switzerland; Robert Schapire, Princeton University, US

Another paper with an evocative title. This work proposes a new algorithm (MUSE) for reconstruction from compressed measurements corrupted by noise. The algorithm, which comes with guarantees on the infinity-norm of the reconstruction error, can be formulated as a two-player game (hence the title), which in turn is equivalent to an alternating optimization scheme.


An interesting work on the fundamentals of parameter estimation with underlying sparsity:

"PERFORMANCE BOUNDS FOR SPARSE PARAMETRIC COVARIANCE ESTIMATION IN GAUSSIAN MODELS"; Alexander Jung, Vienna University of Technology, Austria; Sebastian Schmutzhard, University of Vienna, Austria; Franz Hlawatsch, Vienna University of Technology, Austria; Alfred O. Hero III, University of Michigan, US

This paper studies performance bounds for the problem of estimating the covariance matrix of a Gaussian random vector under the assumption that this covariance matrix can be modeled as a sparse expansion of known "basis matrices". The authors derive lower bounds on the variance of both biased and unbiased estimators for a certain family of covariance matrices of interest. The analysis shows that in the low-SNR regime the sparsity does not help, while in the high-SNR regime the performance of an oracle estimator can be achieved. Between these two extremes the bound presents a transition phase that is polynomial in the SNR. This work studies a more involved scenario than the one considered by Ben-Haim et al. in their paper from last year.
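
To fix ideas on the signal model, here is a minimal sketch of a covariance matrix built as a sparse expansion of known basis matrices (my own construction with a made-up dictionary and dimensions; the paper derives bounds for such models, it does not propose this code):

```python
import numpy as np

rng = np.random.default_rng(3)
n, K, s = 8, 20, 3                       # dimension, dictionary size, sparsity

# Known "basis matrices": a hypothetical dictionary of symmetric PSD matrices
A = rng.standard_normal((K, n, n))
basis = np.array([a @ a.T for a in A]) / n

# Sparse nonnegative coefficient vector: only s of its K entries are active
x = np.zeros(K)
x[rng.choice(K, size=s, replace=False)] = rng.uniform(0.5, 2.0, size=s)

# Covariance as the sparse expansion sum_k x_k * B_k, then Gaussian samples
Sigma = np.tensordot(x, basis, axes=1)
samples = rng.multivariate_normal(np.zeros(n), Sigma, size=1000)
print(np.linalg.norm(np.cov(samples.T) - Sigma))  # raw sample-covariance error
```

The estimation problem the bounds refer to is recovering x (and hence Sigma) from such samples, exploiting the fact that only a few expansion coefficients are nonzero.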


ICASSP will soon have to create a special session just for Yonina C. Eldar. This year she appears as a coauthor of six papers. I won't go over all of them; here are just a couple of hints:

"SHANNON MEETS NYQUIST: CAPACITY LIMITS OF SAMPLED ANALOG CHANNELS"; Yuxin Chen, Stanford University, United States; Yonina C. Eldar, Technion / Israel Institute of Technology, Israel; Andrea Goldsmith, Stanford University, US

This preliminary work explores how capacity is affected by sampling mechanisms operating below the channel's Nyquist rate. Under a Gaussianity assumption, the problem is formulated as a joint optimization over the input distribution and the receiver filter for a given set of uniform samplers. For a single receiver chain it is shown that the optimal transmission strategy avoids aliasing at the output, i.e., the optimal receiver filter does not always correspond to the classical matched filter. For multiple receiver chains, sufficient conditions are derived under which the Nyquist capacity is attained at a total sampling rate equal to the Landau rate. My opinion is that the journal version of this work ("Shannon meets Nyquist: capacity limits of sampled analog channels", Y. Chen, Y. C. Eldar, and A. J. Goldsmith, in preparation) will take the first steps towards a rigorous analysis of the capacity achievable by compressed sampling schemes.
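
The single-branch intuition can be illustrated with a crude discretized toy (my own sketch; the gains, rates, and the "keep the best bins" prefilter are assumptions, not the paper's exact formulation):

```python
import numpy as np

def waterfill(g, P):
    """Water-filling over parallel channels with gains g and total power
    P; returns the summed capacity in bits per channel use."""
    g = np.sort(g)[::-1]
    for k in range(len(g), 0, -1):
        mu = (P + np.sum(1 / g[:k])) / k    # candidate water level
        if mu > 1 / g[k - 1]:               # weakest active bin gets power > 0
            return np.sum(np.log2(mu * g[:k]))
    return 0.0

rng = np.random.default_rng(4)
n_bins, P = 64, 10.0
g = rng.exponential(1.0, n_bins)            # hypothetical per-bin channel gains

c_nyquist = waterfill(g, P)                 # transmit over the whole band
frac = 0.5                                  # sampling at half the Nyquist rate
best = np.sort(g)[::-1][: int(frac * n_bins)]
c_sub = waterfill(best, P)                  # alias-free: use only the best bins
print(round(c_nyquist, 2), round(c_sub, 2))
```

Restricting the transmission to an alias-free set of bins, as in the sketch, mirrors the "avoid aliasing" behavior of the optimal single-branch strategy; with frac = 1.0 the two capacities coincide.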

"SUB-NYQUIST SAMPLING OF SHORT PULSES"; Ewa Matusiak, University of Vienna, Austria; Yonina C. Eldar, Technion, Israel

This work applies the ideas of the modulated wideband converter to the time domain. Here it is assumed that the transmitted signal is composed of a series of narrow pulses with unknown shapes and unknown delays. To reduce the required sampling rate, the proposed approach exploits the sparsity of the signal in the Gabor-frame domain, yielding a result similar to that of the modulated wideband converter.

