
Apr 9, 2010

Performance evaluation in Cognitive Radio systems

Performance metrics. While performance evaluation is key to comparing and ranking different cognitive radio systems, it has received limited attention from the research community [Z09]. For example, when I attended the ICASSP sessions related to cognitive radio, I observed the lack of a common framework for ranking the different algorithms.

Each author makes different assumptions about the cognitive node's a priori knowledge, channel model, front-end characteristics, working environment... In the case of spectrum sensing this problem is generally avoided by comparing the proposed algorithm against the very simple energy detector (and, of course, beating it). More global algorithms are harder to evaluate, since even the simplest cognitive network presents a cumbersome number of possible metrics (e.g. total throughput, maximum achievable sum rate of the primary or secondary systems, power dissipated at a given node, probability of outage driven by secondary interference, spectral efficiency...).
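The energy-detector baseline mentioned above is simple enough to benchmark in a few lines of simulation. The sketch below is my own toy setup, not any particular paper's model: complex Gaussian noise, a Gaussian primary signal, and a detection threshold calibrated empirically to hit a target false-alarm rate.

```python
import numpy as np

rng = np.random.default_rng(0)

def cn_samples(shape):
    """Unit-power circularly symmetric complex Gaussian samples."""
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

def energy_detector_pd(snr_db, n=500, pfa=0.1, trials=2000):
    """Monte Carlo detection probability of a plain energy detector at a
    fixed false-alarm rate (illustrative assumptions throughout)."""
    # Calibrate the threshold empirically under the noise-only hypothesis H0.
    t0 = np.mean(np.abs(cn_samples((trials, n))) ** 2, axis=1)
    thr = np.quantile(t0, 1 - pfa)
    # Detection probability under H1: average energy of signal + noise above threshold.
    snr = 10 ** (snr_db / 10)
    x = np.sqrt(snr) * cn_samples((trials, n)) + cn_samples((trials, n))
    t1 = np.mean(np.abs(x) ** 2, axis=1)
    return float(np.mean(t1 > thr))

for snr_db in (-15, -10, -5):
    print(f"SNR {snr_db:>4} dB: Pd = {energy_detector_pd(snr_db):.2f}")
```

Even this minimal baseline already exposes the problem: the resulting detection probability depends on the sensing time n, the target false-alarm rate and the assumed signal model, all of which vary from paper to paper.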

This problem, probably common to other research areas, will remain hard to solve until a common framework for testing cognitive radio algorithms is developed. In this context, Wireless @ Virginia Tech is developing an open-source cognitive radio architecture:

"The objective of the design is to develop a distributed & modular system that provides portability and interoperability between components developed in different programming languages, across different SDR and hardware platforms. [...] Users of CROSS can focus entirely on one aspect of the cognitive radio without developing or modifying components that have no direct relevance to their specific focus of research."

However, from the available documentation, I understand that the physical layer is limited to existing SDR components, and thus it is not useful for experiments that involve, for instance, a compressive sampling front-end or sophisticated sensing algorithms.

In the same direction, another Cognitive Radio Cognitive Network Simulator is being developed by Jing Zhong and Jialiang Li. I say "in the same direction" because it seems to be a high-level implementation of the cognitive radio network, and it does not allow complex physical-layer tweaks.

I have a keen interest in these network simulators since I have recently been working on the performance evaluation of the game-theoretic framework developed in [JVM10], where I found it difficult to determine the most relevant performance metrics.
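To illustrate how many reasonable choices there are, the toy simulation below computes three of the candidate metrics listed earlier (primary outage probability, secondary ergodic rate, sum rate) for a single primary/secondary link pair under Rayleigh fading. Every gain, power and rate threshold here is an assumption of mine for illustration, not a value from [JVM10].

```python
import numpy as np

rng = np.random.default_rng(1)

def toy_network_metrics(p_sec, trials=100_000):
    """Candidate performance metrics for a toy primary/secondary pair.
    Channel gains are Rayleigh-faded (exponential power gains); all
    numeric values are made-up illustrative choices."""
    g_pp = rng.exponential(1.0, trials)   # primary tx -> primary rx
    g_sp = rng.exponential(0.3, trials)   # secondary tx -> primary rx (interference)
    g_ss = rng.exponential(1.0, trials)   # secondary tx -> secondary rx
    noise, p_pri = 0.1, 1.0
    rate_pri = np.log2(1 + p_pri * g_pp / (noise + p_sec * g_sp))
    rate_sec = np.log2(1 + p_sec * g_ss / noise)
    return {
        "primary_outage": float(np.mean(rate_pri < 1.0)),  # P(rate < 1 b/s/Hz)
        "secondary_rate": float(np.mean(rate_sec)),        # ergodic rate (b/s/Hz)
        "sum_rate": float(np.mean(rate_pri + rate_sec)),   # total throughput proxy
    }

for p in (0.0, 0.5, 1.0):
    m = toy_network_metrics(p)
    print(p, {k: round(v, 3) for k, v in m.items()})
```

Sweeping the secondary power already shows the tension: raising it improves the secondary rate while degrading primary outage, so which algorithm "wins" depends entirely on which metric (or weighting of metrics) one picks.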
[Z09] Y. Zhao, S. Mao, J. O. Neel and J. H. Reed, "Performance Evaluation of Cognitive Radios: Metrics, Utility Functions, and Methodology," Proceedings of the IEEE, 2009.

[JVM10] S. K. Jayaweera, G. Vazquez-Vilar and C. Mosquera, "Dynamic Spectrum Leasing (DSL): A New Paradigm for Spectrum Sharing in Cognitive Radio Networks," accepted for publication in IEEE Transactions on Vehicular Technology, Jan. 2010.
