
Computer Networks 91 (2015) 196–224

Contents lists available at ScienceDirect

Computer Networks

journal homepage: www.elsevier.com/locate/comnet

SMART: A SpectruM-Aware ClusteR-based rouTing scheme for distributed cognitive radio networks

Yasir Saleem a,b,∗, Kok-Lim Alvin Yau a, Hafizal Mohamad b, Nordin Ramli b, Mubashir Husain Rehmani c

a Faculty of Science and Technology, Sunway University, Selangor, Malaysia
b Wireless Network and Protocol Research Lab, MIMOS Berhad, Kuala Lumpur, Malaysia
c COMSATS Institute of Information Technology, Wah Cantt, Pakistan

    Article history:

    Received 8 March 2015

    Revised 15 July 2015

    Accepted 27 August 2015

    Available online 4 September 2015

    Keywords:

    Cognitive radio

    Clustering

    Routing

    Reinforcement learning

    Cluster merging

    Cluster splitting

a b s t r a c t

Cognitive radio (CR) is the next-generation wireless communication system that allows unlicensed users (or secondary users, SUs) to exploit the underutilized spectrum (or white spaces) in licensed spectrum while minimizing interference to licensed users (or primary users, PUs). This article proposes a SpectruM-Aware clusteR-based rouTing (SMART) scheme that enables SUs to form clusters in a cognitive radio network (CRN) and enables each SU source node to search for a route to its destination node on the clustered network. An intrinsic characteristic of CRNs is the dynamicity of the operating environment in which network conditions (i.e., PUs' activities) change as time goes by. Based on the network conditions, SMART enables SUs to adjust the number of common channels in a cluster through cluster merging and splitting, and searches for a route on the clustered network using an artificial intelligence approach called reinforcement learning. Simulation results show that SMART selects stable routes and significantly reduces interference to PUs, as well as routing overhead in terms of route discovery frequency, without significant degradation of throughput and end-to-end delay.

© 2015 Elsevier B.V. All rights reserved.

∗ Corresponding author at: Faculty of Science and Technology, Sunway University, Bandar Sunway, Petaling Jaya, 46150 Selangor, Malaysia. Tel.: +923337177553.
E-mail address: [email protected], [email protected], [email protected] (Y. Saleem).

http://dx.doi.org/10.1016/j.comnet.2015.08.019
1389-1286/© 2015 Elsevier B.V. All rights reserved.

1. Introduction

Cognitive radio (CR) [1] is the next-generation wireless communication system that enables unlicensed or secondary users (SUs) to explore and use underutilized licensed spectrum (or white spaces) owned by licensed or primary users (PUs) in order to improve the overall spectrum utilization without causing unacceptable interference to PUs. During an SU's communication, if a PU re-appears on the SU's channel, the SU must vacate the channel and switch to another available

channel. IEEE 802.22, which is the first worldwide wireless standard for CR, was defined in November 2004 [2].

Routing finds a route from a source node to a destination

    node across a network. Routing in CRN is challenging due to

    several reasons. First, CRN is characterized by the dynamicity

    of channel availability due to different levels of PUs activi-

    ties, which vary the amount of white spaces. Second, multi-

    ple heterogeneous channels exist, therefore, it is challenging

    for SUs to select the most appropriate channels from a list

    of available channels. Third, the dynamicity of channel avail-

    ability causes the lack of a common control channel for con-

    trol information exchange in routing. Fourth, the availabil-

    ity of multiple heterogeneous channels and the dynamicity

    of channel availability can cause frequent channel switches

    by SUs, which can degrade the SUs network performance.

    Therefore, routing protocols for traditional wireless networks

    that maintain end-to-end paths, such as ad-hoc on-demand

distance vector (AODV) routing protocol, cannot be directly

    applied in CRNs because they do not consider the challenges

    of routing in CRNs and may cause high routing overhead due

to the constant flooding of routing messages [3]. Routing pro-

    tocols for CRNs must address the challenges of CRNs and be

    spectrum-aware [4,5], so that routes are stable and SUs can

perform data communication for a longer period of time without much disruption, as well as minimize interference to

    PUs.

Routing protocols can be cluster-based, in which case they run over a

    clustered network [6,7]. Clustering, which is a topology man-

    agement mechanism, has traditionally been applied in wire-

    less networks to organize nodes into logical groups in order

    to provide network scalability and stability. Each cluster is

    comprised of four kinds of nodes, namely clusterhead, mem-

    ber node, relay node and gateway node. A clusterhead serves

    as a local point of process for various applications, such as

channel sensing and routing. A member node associates itself

    with a clusterhead. Clusterhead and member nodes commu-

    nicate regularly among themselves using a common channel,

    and these are called intra-cluster communications. The com-

    mon channel is available to all member nodes of a cluster.

    A relay node is a member node that provides connectivity

    to a member node which is located out of the range of its

    clusterhead. A gateway node, which is also a member node

    located at the fringe of a cluster, can hear from neighboring

    cluster(s), and so it provides inter-cluster communications.

    The clusterheads and gateway nodes form a backbone to the

    SU destination node. The number of hops between a mem-

ber node and a clusterhead in a cluster may be a single hop [8], or two or more. Popular traditional clustering algorithms, such

    as lowest ID [9] and maximum node degree [10], may not be

    suitable in CRNs due to the dynamicity of channel availabil-

    ity and the presence of multiple channels. Among nodes in a

single-hop or multi-hop neighborhood, the lowest ID clustering algorithm selects a node with the lowest ID as the cluster-

    head; while the maximum node degree clustering algorithm

    selects a node with the highest number of neighbor nodes as

    the clusterhead.
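For concreteness, the following is a minimal Python sketch (not part of the paper) of these two traditional clusterhead election rules; the node identifiers, the tie-breaking by lower ID, and the input structures are illustrative assumptions.

# Illustrative sketch: electing a clusterhead in a neighborhood with the
# lowest-ID and maximum-node-degree rules.

def lowest_id_clusterhead(neighborhood_ids):
    """Lowest ID clustering: the node with the smallest ID becomes clusterhead."""
    return min(neighborhood_ids)

def max_degree_clusterhead(degree_by_node):
    """Maximum node degree clustering: the node with the most neighbors wins;
    ties are broken here by the lower ID (an assumption for determinism)."""
    return min(degree_by_node, key=lambda n: (-degree_by_node[n], n))

if __name__ == "__main__":
    ids = [7, 3, 12, 5]
    degrees = {7: 2, 3: 4, 12: 4, 5: 1}
    print(lowest_id_clusterhead(ids))       # -> 3
    print(max_degree_clusterhead(degrees))  # -> 3 (degree 4, lower ID than 12)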

    Cluster size, which represents the number of nodes in a

    cluster, affects various performance metrics. Larger cluster

    size reduces routing overhead, such as route request (RREQ)

and route reply (RREP), since the flooding of routing overheads only involves clusterheads and gateway nodes along a backbone, as well as reduces error probability in the final

    decision on channel availability since this decision is made

    based on channel sensing outcomes collected from higher

    number of nodes in a cluster. Smaller cluster size (or larger

    number of clusters in a network) increases the number of

    common channels, and hence the connectivity among nodes

    in a cluster, because physically close nodes may share a sim-

    ilar set of available channels. Since clusters may use differ-

    ent common channels, the contention and interference lev-

    els in the network can be reduced, and this subsequently

    improves the routing performance. Higher number of com-

    mon channels in a cluster minimizes the occurrence of re-

    clustering due to the improved connectivity among nodes

    in a cluster. While achieving larger cluster size may seem to

    be more favorable in traditional distributed networks in or-

der to improve scalability, the same cannot be said for CRNs since achieving smaller cluster size improves stability and addresses the intrinsic characteristics of CRNs, particularly

    the dynamicity of channel availability. In our work, the clus-

    ter size is adjusted through cluster maintenance (i.e., cluster

    merging and splitting) based on the network performance

    brought about by routing, which is dependent on network

    conditions (i.e., PUs activities) that change as time goes by, so

that a cluster fulfills the requirement on the number of com-

    mon channels in a cluster to improve scalability and stability.

The readers are referred to [11–13] on how to determine the

    optimal cluster size.

    Cluster-based routing is preferred in CRN for the follow-

    ing reasons. First, it provides network scalability by reducing

the flooding of routing control messages. Second, it provides

    network stability by reducing the effects of dynamic condi-

    tions in the network (i.e., PUs activities), since the changes

    affect the network at the cluster level, so only local update

is required instead of whole network reconfiguration. Third,

    it supports cooperative tasks (e.g., routing and channel sens-

ing) which are essential to CR operations. For example, a clus-

    terhead collects channel sensing outcomes from its member

nodes and subsequently makes a final decision on channel

    availability. This improves the accuracy of channel availabil-

    ity decision as compared to the decision made by a single

    node [14].
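As a rough illustration of this cooperative task, the following Python sketch fuses per-channel sensing reports at a clusterhead; the majority-vote fusion rule and the report format are assumptions, since the paper does not specify the fusion rule here.

# Illustrative sketch: a clusterhead fusing per-channel sensing reports from
# member nodes into a final channel-availability decision by majority vote.
from collections import defaultdict

def fuse_sensing_reports(reports):
    """reports: list of dicts {channel: True/False}, True meaning 'idle (white space)'.
    Returns the channels declared available by more than half of the reporters."""
    votes = defaultdict(int)
    for report in reports:
        for channel, idle in report.items():
            votes[channel] += 1 if idle else 0
    quorum = len(reports) / 2
    return sorted(ch for ch, v in votes.items() if v > quorum)

if __name__ == "__main__":
    member_reports = [{1: True, 2: False, 3: True},
                      {1: True, 2: True, 3: True},
                      {1: False, 2: False, 3: True}]
    print(fuse_sensing_reports(member_reports))  # -> [1, 3]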

Reinforcement learning (RL) [15] is an artificial intelli-

    gence approach that enables a decision maker to observe its

    state and reward, learn, and subsequently carry out a proper

    action (i.e., route selection) so that the state and reward,

    which are the consequences of the action, improve in the

    next time instance. RL has been applied to routing [16,17],

    and its advantages are as follows:

Instead of tackling every single factor that affects the network performance, RL models the network performance

    that covers a wide range of factors in the operating en-

    vironment or network conditions affecting the network

    performance (i.e., the channel utilization level by PUs

    and channel quality); hence, it adopts a simple modeling

    approach.

Prior knowledge of the operating environment or network conditions is not necessary, and so a SU can learn

    about the operating environment as time goes by.
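To make the RL idea concrete, here is a generic tabular Q-learning sketch in Python for next-hop selection. It is not the SMART formulation (which appears later in the paper); the state, action and reward definitions, the QRouter class name, and the parameter values are illustrative assumptions.

# Illustrative tabular Q-learning sketch for next-hop selection; the reward
# could reflect, e.g., successful delivery on a channel with many white spaces.
import random
from collections import defaultdict

class QRouter:
    def __init__(self, alpha=0.5, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)          # (state, next_hop) -> value
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose(self, state, next_hops):
        if random.random() < self.epsilon:   # explore occasionally
            return random.choice(next_hops)
        return max(next_hops, key=lambda a: self.q[(state, a)])  # exploit

    def update(self, state, action, reward, next_state, next_hops):
        best_next = max((self.q[(next_state, a)] for a in next_hops), default=0.0)
        td_target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])

if __name__ == "__main__":
    router = QRouter()
    hop = router.choose("SU1->BS", ["CH2", "CH3"])
    router.update("SU1->BS", hop, reward=1.0, next_state="CH2->BS", next_hops=["CH3"])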

    This article presents SpectruM-Aware clusteR-based

rouTing (SMART), a cluster-based routing scheme that applies RL to select stable routes, as well as to maximize SUs' routing and clustering performance, including SUs' interference to PUs and the number of common channels in a cluster, without significant degradation of SUs' throughput

    and end-to-end delay.

    SMART provides three main contributions imperative to

    CRNs as follows:

    C.1 SMART maximizes the utilization of white spaces in

order to maximize SUs' network performance. This can

    be achieved through adaptation to the dynamicity of

    network conditions (i.e., PUs activities).

C.2 SMART aims to fulfill the requirement on the mini-

    mum number of common channels in a cluster, which

    enhances cluster stability through improved connec-

tivity among nodes in a cluster, in order to increase the availability of at least a single common channel in a


Fig. 1. A cluster structure in a CRN.

    cluster, while leaving the rest of the common channels

    as backups. A common channel is essential for mem-

    ber nodes to send data packets to their respective clus-

terheads efficiently so that clusterheads do not switch

    channels constantly. A group of geographically adja-

    cent SUs tend to share a similar set of available chan-

    nels, so a smaller cluster size increases the number of

    common channels in a cluster [18]. Higher number of

    common channels in a cluster prevents frequent clus-

    ter splitting as a result of the re-appearance of PUs

    activities.

    C.3 SMART solves one of the main problems of broad-

    casting using a single transceiver in CRNs, which re-

    quires a SU to send a similar packet in various chan-

    nels so that all neighboring SUs can receive the packet.

    In SMART, using a single transceiver, a clusterhead

    knows about its neighboring clusters through gate-

    way nodes. First, it broadcasts routing control mes-

    sages (i.e., RREQ) using its operating channel, which

    is common in its cluster. Second, gateway nodes that

    receive the routing control messages forward them to

    their respective neighboring clusters using the operat-

    ing channel of the neighboring clusters. Thus, the clus-

    terhead and gateway nodes are aware of the operating

    channels of neighboring nodes, which allow them to

broadcast efficiently without broadcasting on all the

    available channels.

The rest of this paper is organized as follows. Section 2 presents an overview of SMART. Section 3 presents related work. Section 4 presents the system model. Section 5 presents the SMART clustering scheme. Section 6 presents the SMART routing scheme. Section 7 presents the message overhead and time complexity of SMART. Section 8 presents the performance evaluation, results and discussion. Section 9 presents the conclusion

    and future work.

    2. An overview of SMART

    Fig. 1 shows an example of a cluster structure in which

    SU nodes in a CRN are grouped. The SU nodes transmit their

    data to a base station (BS) in a distributed multi-hop man-

    ner. There are three clusters (i.e., C1, C2 and C3) in which

clusterheads are CH1, CH2 and CH3, member nodes are MN1,1, MN2,1, MN3,1 etc., relay node is RN1,2, and gateway nodes are GN1,3,2 and GN1,1,2. Member nodes MN1,1, MN2,1 and MN3,1 are

    associated with clusterhead CH1 in cluster C1. Relay node

    RN1,2 provides connectivity to member node MN1,2 toward

    clusterhead CH2 in cluster C2. Gateway node GN1,3,2 is asso-

    ciated with clusterhead CH3, and it provides two-hop inter-

    cluster connectivity from CH3 to neighboring clusterhead

    CH2. As another example, gateway node GN1,1,2 is associated

    with clusterhead CH1, and it provides three-hop inter-cluster

    connectivity from CH1 to neighboring clusterhead CH2. Mem-

    ber nodes MN2,1 and MN3,1 send data packets to destination

SU BS through backbone CH1–GN1,1,2–GN1,2,1–CH2–GN2,2,3–CH3–GN2,3,BS.

    SMART provides two-fold functions: clustering and rout-

    ing. Clustering aims to form clusters that fulll the require-

    ments on the number of common channels in a cluster (see

    C.2 in Section 1) and allow nodes to broadcast routing control

messages efficiently without broadcasting on all the available

    channels (see C.3 in Section 1). It consists of cluster mainte-

    nance which adjusts the number of common channels in a

    cluster through cluster merging and cluster splitting mech-

    anisms, which have not been investigated in the context of

    CRNs. In cluster merging, a cluster is reduced from the net-

    work in which the clusterhead relinquishes its role. Subse-

quently, a new clusterhead is selected and the member nodes

    of the two original clusters join the newly created cluster. In

    cluster splitting, a new cluster is formed in the network in

    which an existing member node of a cluster is selected to

    serve as a clusterhead and a common operating channel is

    selected. Subsequently, existing member nodes of the cluster

may join this newly-formed cluster. Next, routing searches for a route from a SU source node to a SU destination node

    on the clustered network using RL that maximizes the uti-

    lization of white spaces (see C.1 in Section 1).

    Cluster maintenance and routing mechanisms can best

    be explained using an example. In Fig. 1, suppose, mem-

    ber nodes MN2,1 and MN3,1 send data packets to destination

SU BS through backbone CH1–GN1,1,2–GN1,2,1–CH2–GN2,2,3–CH3–GN2,3,BS. Due to the dynamicity of network conditions (i.e., PUs' activities), clusterhead CH1's network performance

    (i.e., packet loss rate) deteriorates due to the lack of a com-

    mon channel in the cluster, and so it must be adaptive to

the changing environment (see C.1 in Section 1). Cluster C1 undergoes cluster splitting and a new cluster C4 is formed

    as shown in Fig. 2. Subsequently, clusterhead CH1 may use

    existing route or search for a new route, while the newly-

    formed clusterhead CH4 must search for a new route. In


Fig. 2. A cluster structure after cluster C1 in Fig. 1 undergoes cluster splitting. Cluster C1 and the newly-formed cluster C4 either use existing route or search for new routes to destination SU BS.

Fig. 2, upon completion of the routing phase, clusterhead CH1 uses existing route CH1–GN1,1,2–GN1,2,1–CH2–GN2,2,3–CH3–GN2,3,BS to its destination SU BS; and clusterhead CH4 uses a new route CH4–GN1,4,2–GN3,2,4–CH2–GN2,2,3–CH3–GN2,3,BS.

    Note that, clusters C1 and C4 may use different common

    channels and so the contention and interference levels in the

    network can be reduced compared to a single cluster C1 in

    Fig. 1, which improves network performance.

    3. Related work

    This section presents related work on cluster-based rout-

    ing schemes and the application of RL in CRNs.

    3.1. Cluster-based routing schemes

    While there have been a large number of separate inves-

    tigations into clustering [19,20] and routing [2123], there is

    only a perfunctory attempt to investigate cluster-based rout-

    ing schemes in CRNs.

    Huang et al. [6] propose a cluster-based routing scheme.

    The cluster formation procedure uses several clustering met-

    rics, namely the node degree level (or the number of neigh-

    bor nodes), the average number of hops in a cluster, and

    the average number of channel switches due to distinctive

    channels selected by neighboring nodes. Particle swarm opti-

    mization enables clusterheads to select common channels for

    inter-cluster communications. Subsequently, routing selects

a route with the highest availability probability, which is the product of the link availability probabilities along the route.

    This approach has been shown to achieve higher throughput,

    as well as lower number of clusters in the network and end-

    to-end delay.

    Talay and Altilar [7] propose another cluster-based rout-

    ing scheme. First, the cluster formation procedure uses sev-

    eral clustering metrics, namely a set of available channels, as

    well as physical location and velocity of each node, to select

    nodes for a cluster. Second, a clusterhead is selected based on

a weighted clustering metric that takes into account the node degree and mobility levels, as well as the set of the available channels of each node. The number of member nodes in a cluster must not exceed a pre-defined value. Subsequently,

    routing selects a route that increases connectivity (or reduces

    the effects of PUs activities) and reduces interference lev-

els among SUs (i.e., reduces signal-to-interference-and-noise

    ratio (SINR) and expected transmission time (ETT)). This ap-

    proach has been shown to achieve higher packet delivery

    ratio and throughput, as well as lower delay and routing

    overhead.

    SMART formulates and solves the cluster-based rout-

    ing scheme using RL, and it provides further enhancements

    to [6,7] in two main aspects, particularly cluster size ad-

    justment and cluster maintenance. Cluster size adjustment

    changes the cluster size based on the requirements on the

    number of common channels in a cluster; while cluster

    maintenance is comprised of cluster merging and splitting.

    3.2. Reinforcement learning in CRNs

    Reinforcement learning has been widely used in CRNs for

    different applications, such as the detection of spectral re-

    sources [24], solving the problems of interference to PUs [25],

    auction of dynamic spectrum access [26], power control [27],

    routing [16,17], etc.

    Galindo-Serrano and Giupponi [25] solve the problem of

    interference to PUs by SUs using a multiagent RL approach,

    known as decentralized Q-learning. An IEEE 802.22 standard-

    based CRN is considered by representing SU base stations as

    multiple agents. A scenario with partial information about

    the operating environment is considered in which the pro-

    posed system learns near-optimal policy which has lower

convergence rate; while providing the implementation benefits of deployment and feasibility. RL has been shown to im-

prove the outage probability.

Teng et al. [26] apply RL in an auction-based dynamic

    spectrum access scheme in a CRN with one PU and multiple

    SUs. SUs use an RL approach called Q-learning to determine

    their respective bidding prices intelligently and dynamically,

and compete for spectrum access. The bidding prices for different channels are different, based on the current and future

    expected packet transmissions to secure the spectrum oppor-

    tunities. The PU releases the unused channels to SUs with the

maximal bids. RL has been shown to improve bidding efficiency and reduce packet loss.

    Zhou et al. [27] design a distributed power control al-

    gorithm for CRNs using RL with low implementation com-

    plexity. The proposed algorithm does not require power and

    interference information of SUs. SUs control their transmis-

    sion power by observing and learning the interference level

    based on the transmission rates and feedback signals of PUs,

which are received in the previous time instance. RL has been shown to achieve greater spectrum efficiency by minimizing

    SUs interference to PUs.

    Two routing schemes that apply RL to CRNs have been

    proposed by Xia et al. [16] and Al-Rawi et al. [17]. Xia et al.

    [16] apply a dual Q-routing model to reduce end-to-end de-

    lay of SUs by choosing a route with higher number of avail-

    able channels. This reduces the delay incurred in negotiation

    of a common channel for data transmission among SUs. Each

    SU learns about the accumulated number of available chan-

nels of a route leading to its SU destination node using a dual Q-

    routing model and selects a SU neighbor node with the high-

    est number of common channels. Al-Rawi et al. [17] apply a

    Q-routing model that chooses a route with lower accumu-

    lated link-layer delay.

    To the best of our knowledge, the application of RL to

    cluster-based routing in CRNs is novel in its approach. SMART

    is different from existing RL-based routing approaches for

    CRNs [16,17] in a way that existing routing schemes select

    routes based on the number of available channels [16] and

    link-layer delay [17], while SMART selects stable routes that

    have channels with higher OFF-state probability in order to

    minimize frequent disruptions and channel switches as a

    result of the re-appearance of PUs activities during data

    communication. Most importantly, SMART is a novel joint

    channel selection and cluster-based routing scheme, and so it

    extends existing RL-based routing schemes that assume that

    channel selection is readily available [16,17].

4. System model

We consider a distributed ad hoc CRN consisting of static SUs and PUs. A SU can be any device equipped with CR functionality so that it can switch to any of the available channels. Each SU must minimize interference to PUs, and so it performs CR functions, including channel sensing, channel selection, channel sharing and channel hand-off, in a distributed manner. The CRN is comprised of SUs n ∈ N = {1, 2, 3, ..., |N|}, PUs j ∈ J = {1, 2, 3, ..., |J|}, and channels k ∈ K = {1, 2, 3, ..., |K|}, where |N|, |J| and |K| represent the number of SUs, PUs and channels, respectively. Jk ⊆ J represents a set of PUs j ∈ Jk that use channel k.

The rest of this section presents our CRN architecture (see Fig. 3) comprised of internal and external environments. We incorporate CR functions into QualNet, which lacks a CR environment. The CRN architecture allows this research to focus on cluster-based routing, rather than the underlying CR functions, such as channel sensing and channel sharing.

4.1. Internal environment

The components in the internal environment can be categorized into three main layers (i.e., network, data link and physical layers) and a cross-layer repository. The network layer includes channel decision and routing protocol. The data link layer includes channel sensing and channel sharing. The physical layer includes channel hand-off.

Network layer. The network layer contains a cluster-based routing protocol called SMART, which is a joint channel decision and route selection protocol (see Section 6). The channel decision module receives information about white spaces from the underlying channel sensing module in the data link layer. Information exchanged among the layers and environment, as shown in Fig. 3, is relevant to SMART. Basically, there are two main functions offered by the channel decision module, namely channel selection and channel switching. Channel selection enables a SU to select one of the available channels for transmission while fulfilling the channel selection criteria. This research applies a joint channel and route selection approach which has been shown to increase route stability and enhance network performance [1]. Channel switching enables a SU to cease transmission in a channel upon the detection of PUs' activities and switch its transmission to another available channel with white spaces, which is determined by the channel selection module in order to minimize interruption to PUs' transmissions. To initiate a channel switch, the channel decision module sends an indication signal, which consists of the next operating channel, to the channel hand-off module in the physical layer.

Data link layer. Each SU has a single network interface for data and control packet transmissions. There are two main components in the data link layer. The channel sensing module detects white spaces and determines the channel utilization level by PUs in the sensing channels through interacting with the PUs' activities module in the external environment. The channel sharing module enables distributed channel access among SUs in a shared wireless environment. A medium access control (MAC) protocol, namely IEEE 802.11, is applied to coordinate transmissions among SUs in a particular channel so that the SUs' transmissions do not interfere with other SUs' activities.

Fig. 3. CRN architecture.


Table 1
Exponential distribution parameter values for |K| = 10 licensed channels.

Rate         Channel
             1      2     3     4      5     6      7      8      9      10
λ^k_ON,j     1.25   0.4   1     0.4    0.5   2      1      0.18   0.5    0.67
λ^k_OFF,j    0.67   2     1     0.33   1     0.29   0.25   2      1.33   0.5

Table 2
Types of PUs activities.

PU activity    ON period   OFF period   λ^k_ON,j   λ^k_OFF,j   Channels in Table 1
Long-term      Long        Long         ≤ 1        ≤ 1         3, 4, 5, 7, 10
High           Long        Short        ≤ 1        > 1         2, 8, 9
Low            Short       Long         > 1        ≤ 1         1, 6
Intermittent   Short       Short        > 1        > 1         —

    Physical layer. The channel hand-off module enables a SU

    to vacate its current operating channel and switch to another

    one. This module is invoked by an indication signal, which

    consists of the next operating channel, from the channel de-

    cision module in the network layer. Any transmission must

    be stopped during a hand-off operation.

    Cross-layer repository. The cross-layer repository enables

    the network, data link and physical layers to share informa-

    tion. An example is the information of the next operating

    channel shared from the channel decision module in the net-

    work layer to the channel handoff module in the physical

    layer.
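As an aside, the cross-layer sharing just described can be pictured with a small Python sketch; the class names, keys and method names below are illustrative assumptions, not part of the paper's implementation.

# Illustrative sketch of the cross-layer repository idea: the channel decision
# module (network layer) publishes the next operating channel, and the channel
# hand-off module (physical layer) reads it and performs the switch.
class CrossLayerRepository:
    def __init__(self):
        self._store = {}

    def publish(self, key, value):
        self._store[key] = value

    def read(self, key, default=None):
        return self._store.get(key, default)

class ChannelHandoff:
    def __init__(self, repository, current_channel):
        self.repo = repository
        self.current_channel = current_channel

    def on_indication(self):
        # Transmission is stopped during hand-off; here we only switch the channel.
        next_channel = self.repo.read("next_operating_channel")
        if next_channel is not None and next_channel != self.current_channel:
            self.current_channel = next_channel
        return self.current_channel

if __name__ == "__main__":
    repo = CrossLayerRepository()
    handoff = ChannelHandoff(repo, current_channel=3)
    repo.publish("next_operating_channel", 7)   # indication from the channel decision module
    print(handoff.on_indication())              # -> 7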

    4.2. External environment

    The external environment is characterized by multi-

    channel environment, PUs activities and channel quality.

    Multi-channel environment. The operating environment

    consists of a list of licensed channels which are used as data

    channels by SUs. The SUs exploit the white spaces in these

    channels in an opportunistic manner. Note that, for each link

    between a SU node pair, the channels have the same amount

    of link capacity, although they may have different amount of

    white spaces.

PUs activities. The PUs activities module generates PUs' traffic in each channel according to a PUs traffic model. The PUs activities are independent and identically distributed

across the available channels. The ON–OFF transitions of PU activity for PU j ∈ Jk using channel k follow a Poisson model in which the ON and OFF periods, namely T^k_ON,j and T^k_OFF,j, are exponentially distributed with rates λ^k_ON,j and λ^k_OFF,j, respectively [28,29]. We assume that all PUs' activities in a channel k are represented by T^k_ON and T^k_OFF, and each PU j uses a single channel k only [30–32].

In [32], the values of the rates λ^k_ON,j and λ^k_OFF,j for |K| = 10 licensed channels have been measured by collecting samples of channel state transitions, and they are shown in Table 1. There are four kinds of PUs activities with different rate values λ^k_ON,j and λ^k_OFF,j [33], namely, long-term, high, low and intermittent PUs activities. These PUs activities are presented in Table 2, as well as investigated in Section 8.3.1.
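For illustration, a minimal Python sketch of this ON–OFF model follows; it simply draws exponential ON and OFF periods using rates from Table 1 (channel 1 is used in the example), and the function name and output format are assumptions.

# Illustrative sketch: generating an ON-OFF PU activity trace for one channel,
# with ON and OFF periods drawn from exponential distributions (Table 1 rates).
import random

def pu_on_off_trace(lambda_on, lambda_off, horizon):
    """Return a list of (state, start, end) tuples covering [0, horizon)."""
    t, state, trace = 0.0, "OFF", []
    while t < horizon:
        rate = lambda_off if state == "OFF" else lambda_on
        duration = random.expovariate(rate)        # mean period = 1 / rate
        trace.append((state, t, min(t + duration, horizon)))
        t += duration
        state = "ON" if state == "OFF" else "OFF"
    return trace

if __name__ == "__main__":
    random.seed(1)
    # Channel 1 of Table 1: lambda_ON = 1.25, lambda_OFF = 0.67.
    for state, start, end in pu_on_off_trace(1.25, 0.67, horizon=10.0):
        print(f"{state:>3} {start:6.2f} -> {end:6.2f}")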

    Channel quality. Channels in CRNs are heterogeneous

in nature depending on the amount of white spaces

    (or PUs activities). The channel quality indicates the amount

    of white spaces in a channel in terms of the probability of a

channel remaining in the OFF state in the next time slot, calculated using the channel capacity metric (see Eq. (1)).

    5. SMART: clustering

    In this section, we present SMART clustering algorithms.

    We begin with a description of the channel capacity metric

    in Section 5.1, followed by packet structure in Section 5.2.

    Subsequently, we present cluster formation in Section 5.3,

    gateway node selection in Section 5.4, cluster merging in

    Section 5.5 and cluster splitting in Section 5.6. For clar-

    ity, Table 3 shows the notations used in the clustering

    scheme. SMART is also presented using three separate ex-

    amples. Section 5.5.2 presents a cluster merging example.

    Section 5.6.2 presents a cluster splitting example.

    5.1. Channel capacity metric

    The channel capacity metric is based on maximum like-

lihood estimation (MLE), which is defined as the probability

    of a channel being in the OFF state at time t. In SMART, chan-

    nel capacity metric is used to rank the available channels in

clustering and routing. The metric is as follows [32,34]:

Φ^k_t = λ^k_ON,j / (λ^k_ON,j + λ^k_OFF,j) + [λ^k_OFF,j / (λ^k_ON,j + λ^k_OFF,j)] · e^(−(λ^k_ON,j + λ^k_OFF,j)·t)    (1)

A channel k of PU j with a higher probability Φ^k_t at time t has a higher amount of white spaces, and so it is given a higher

    rank. The channel capacity metric, which is applied in clus-

    tering and routing mechanisms, helps to maximize the uti-

    lization of white spaces (see C.1 in Section 1).
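The following short Python function is a direct transcription of Eq. (1), evaluated with rates from Table 1; the function name and the choice of example channels are illustrative.

# Sketch of the channel capacity metric of Eq. (1): the probability that
# channel k is in the OFF state at time t, given its exponential rates.
import math

def channel_capacity(lambda_on, lambda_off, t):
    """Phi^k_t = lam_ON/(lam_ON+lam_OFF) + lam_OFF/(lam_ON+lam_OFF) * exp(-(lam_ON+lam_OFF)*t)."""
    total = lambda_on + lambda_off
    return lambda_on / total + (lambda_off / total) * math.exp(-total * t)

if __name__ == "__main__":
    # Channel 1 (low PU activity) vs. channel 8 (high PU activity) from Table 1.
    print(round(channel_capacity(1.25, 0.67, t=1.0), 3))
    print(round(channel_capacity(0.18, 2.0, t=1.0), 3))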

    5.2. Packet structure

    Each node (i.e., non-clustered node, clusterhead and

member node) exchanges the clustering message CHinfo among themselves. The CHinfo is embedded in a Hello message, and so it is exchanged periodically or when necessary during cluster formation, cluster merging and cluster splitting.


Table 3
Notations used in clustering.

Notation          Description
CHinfo            Clustering information set by a node
CHinfo.CH         Identification of a new clusterhead
CHinfo.merge      Indication of cluster merging
CHinfo.split      Indication of cluster splitting
CH                Clusterhead of a cluster
nodeID            Identification of a node
nodeState         State of a node: non-clustered (NN), clusterhead (CH), or member node (MN)
clusterID         Identification of a cluster
listChannels      List of available channels at a node
listNodes         List of nodes in a cluster
commonChannels    List of available common channels in a cluster
clusterSize       Number of nodes in a cluster
masterChannel     Common operating channel of a cluster
backupChannel     Backup common channel of a cluster when master channel becomes unavailable

Table 4
Notations of messages used in clustering mechanisms.

Category                 Notation    Description
Cluster joining          JREQi,j     Cluster joining request message from node i to neighboring clusterhead j
                         JACCi,j     Cluster joining acceptance message from neighboring clusterhead j to node i
                         JDECi,j     Cluster joining decline message from neighboring clusterhead j to node i
                         JRECi,j,k   Cluster joining recommendation message from own clusterhead j for recommending its member node i to join neighboring clusterhead k
Gateway node selection   GREQi,j,k   Gateway role request message from member node i to its own clusterhead j for neighboring cluster k
                         GACCi,j,k   Gateway role acceptance message to member node i from its own clusterhead j for neighboring cluster k
                         GDECi,j,k   Gateway role decline message to member node i from its own clusterhead j for neighboring cluster k
Cluster merging          MREQi,j,k   Cluster merging request message from gateway node i to its own clusterhead j and neighboring clusterhead k
                         MACCi,j,k   Cluster merging acceptance message to gateway node i from clusterhead j for merging with cluster k
                         MDECi,j,k   Cluster merging decline message to gateway node i from clusterhead j for merging with cluster k
                         MCANi,j,k   Cluster merging cancellation message from gateway node i to its own clusterhead j and neighboring cluster k
                         RELj,k      Clusterhead role relinquishment message from a new clusterhead k (or existing gateway node) to an existing clusterhead j
Relay node selection     LREQi,j,k   Relay request message from requesting member node i to another member node j for its clusterhead k
                         LACCi,j,k   Relay acceptance message from member node j to requesting member node i for its clusterhead k
                         LINFi,j,k   Relay notification message from relay node j to its clusterhead k about requesting member node i
Timers                   Tw,scan     Scanning interval of a channel for receiving CHinfo from neighboring nodes
                         Tw,res      Waiting interval for a response from a clusterhead
                         Tw,CHE      Waiting interval for clusterhead election

In addition to CH (node ID of a new clusterhead), merge (merge = 1 indicates cluster merging is initiated) and split (split = 1 indicates cluster splitting is initiated) in clustering message CHinfo, other clustering information is included so that the clusterhead can calculate the clustering metric and make decisions on cluster merging, cluster splitting, the selection of a new clusterhead, and the relinquishment of an existing clusterhead. Note that only clusterheads can set values for CH, merge and split, although all nodes (including the clusterheads) include the rest of the information in CHinfo. As an example, an existing clusterhead indicates the node ID of a new clusterhead (or the new cluster ID) using CHinfo.CH. As another example, the clustering information CHinfo.merge = 1 indicates that the clusterhead initiates cluster merging. The packet structure of CHinfo is presented in Fig. 4. Notations used in clustering with their descriptions are summarized in Table 3. A summary of messages, including notations and descriptions, used to coordinate clusters is presented in Table 4. Table 5 describes the notations used in

clustering algorithms.

    All SUs are in non-clustered state at the initial stage.

    Cluster formation creates logical groups (or clusters) con-

    sists of clusterheads and member nodes. Initially, each SU

    scans each of the available channels for a short time du-

    ration Tw,scan during which a node may receive clustering

    message CHinfo from its neighboring nodes (e.g., clusteredand non-clustered nodes), and maintain its neighbor table.

    The ow chart for cluster formation is presented in Fig. 5.

    Algorithm 1(a) presents cluster formation procedure at non-

    clustered node NNi and it is explained in Sections 5.3.1 and

    5.3.2.

    5.3.1. Node joining

    Node joining is the process of associating a non-clustered

    node with a cluster. SMART fullls the availability of a cer-

    tain number of common channels in a cluster upon node

    joining in order to maximize stability (see C.2 in Section 1).


Table 5
Notations used in clustering algorithms.

Notation                 Description
NNi                      Non-clustered node i
CHj                      Clusterhead j
MNi,j                    Member node i of clusterhead j
GNi,j,k                  Gateway node i of clusterhead j that provides inter-cluster connectivity between its own clusterhead and neighboring clusterhead k
RNi,j                    Relay node i that provides intra-cluster connectivity between member node j and its own clusterhead
NCHinfo.nodeState=CH     Number of clustering messages CHinfo received from clusterheads
MCCHi                    Master channel of clusterhead i
NBi                      SU neighbor nodes of SU i
NNl∈NBi                  Non-clustered SU neighbor nodes of SU i
NNNl∈NBi                 Number of non-clustered SU neighbor nodes of SU i
NNCi,k                   Number of nodes that belong to cluster Ci, having channel k in their list of available channels
R^t_CH,j                 Rank of clusterhead j
R^t_chan,k               Rank of channel k
nc,Cj                    Number of common channels in a cluster Cj
nc,Ci,j                  Number of common channels among clusters Ci and Cj
HC,min                   Threshold for the minimum number of common channels in a cluster
HC,merge                 Cluster merging threshold for the minimum number of common channels required for cluster merging
Φ^k_t                    Channel capacity for channel k
NlistChannelsi           Number of available channels of SU i
Hi,Cl                    Number of hops between SU i and neighboring cluster l

    Fig. 4. CHinfo packet structure.

    The node joining process is illustrated in Algorithms 1(a) and

1(b). In part I of Algorithm 1(a), a SU i scans the list of avail-

    able channels in a sequential manner, and each channel is

    scanned for Tw,scan duration. Upon scanning all the available

    channels in the list listChannelsi, if a SU receives CHinfo, itstores the sender of CHinfo and the respective informationin its neighbor table.

    In part II of Algorithm 1(a), there are two circumstances

    in which a SU chooses to join a clusterhead. First, a SU has

    received clustering message CHinfo from a single cluster-head (i.e., NCHinfo.nodeState=CH = 1), and so this clusterheadis chosen. Second, a SU has received more than one clus-

    tering message CHinfo from multiple clusterheads (i.e.,NCHinfo.nodeState=CH > 1), then it ranks the master channels of

the clusterheads based on the channel capacity metric Φ^k_t (see Eq. (1) in Section 5.1).

The clusterheads are ranked such that a clusterhead j has the highest rank (i.e., R^t_CH,j = 1) if its master channel k has the highest channel capacity among the channel capacities of the master channels of the other neighboring clusterheads (i.e., Φ^k_t > Φ^l_t, where l = MCCHm, ∀ m ∈ NBi). Similarly, other clusterheads are ranked as second, third and so on. Finally, the node selects the clusterhead j with the highest rank (i.e., R^t_CH,j = 1).

Next, in both circumstances, a SU i sends a cluster joining request (JREQi,j) to the selected clusterhead j, and waits for its response within a time duration Tw,res. If a SU i receives an acceptance response (JACCi,j) from clusterhead j, it becomes the member node MNi,j of the respective cluster j (i.e., nodeStatei ← MNi,j); otherwise, the next clusterhead with the highest rank using its master channel is chosen.

    Next, we focus on the circumstance in which a cluster-

head CHj receives a JREQi,j message from a non-clustered node i, as shown in Algorithm 1(b). The clusterhead CHj only accepts a joining request by sending back a cluster joining acceptance (JACCi,j) message if the number of common channels nc,Cj in its cluster Cj fulfills the threshold for the minimum number of common channels in a cluster (i.e., nc,Cj ≥ HC,min) upon node joining in order to maximize cluster stability (see

    C.2 in Section 1). Otherwise, it declines the joining request by

    sending back cluster joining decline ( JDECi, j) message to the

    SU i.
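For illustration, a minimal Python sketch of this joining check follows (the function name and the representation of channel lists are assumptions): the clusterhead accepts only if at least HC,min common channels would remain after the node joins.

# Illustrative sketch of the joining check in Algorithm 1(b).
def accept_join(member_channel_lists, joining_channels, h_c_min):
    """member_channel_lists: available-channel lists of current cluster members
    (clusterhead included); joining_channels: the requester's available channels."""
    common = set(joining_channels)
    for channels in member_channel_lists:
        common &= set(channels)
    return len(common) >= h_c_min    # True -> send JACC, False -> send JDEC

if __name__ == "__main__":
    cluster = [[1, 2, 3, 5], [2, 3, 5], [1, 2, 3]]
    print(accept_join(cluster, [2, 3, 4], h_c_min=2))   # common {2, 3} -> True
    print(accept_join(cluster, [3, 4], h_c_min=2))      # common {3} -> False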

    5.3.2. Clusterhead election

    SMART uses a clustering metric that elects a node with

    the highest number of available channels as clusterhead

    during clusterhead election in order to avoid frequent re-

    election. A clusterhead also processes cluster joining request

    from non-clustered nodes. In SMART, a clusterhead must en-

sure that its cluster fulfills the requirement on the minimum

    number of common channels upon any new node joining

    in order to maximize cluster stability (see C.2 in Section 1).

    In part III of Algorithm 1(a), a SU i may not receive clus-

tering message CHinfo from any clusterhead if there is a lack of a clusterhead in its neighborhood, and so it remains in


Algorithm 1(a) Cluster formation procedure at non-clustered node NNi.
1: /* Part I: Scan each available channel in order to receive clustering message CHinfo */
2: while listChannelsi do
3:   Scan each available channel k for Tw,scan duration;
4:   if receive CHinfo then
5:     Store CHinfo;
6:   end if
7: end while
8: /* Part II: Process CHinfo received from clusterhead(s) */
9: if NCHinfo.nodeState=CH = 1 then
10:   Send JREQi,j to CHj;
11: else if NCHinfo.nodeState=CH > 1 then
12:   for k in CHinfo.nodeID.masterChannel do
13:     Calculate Φ^k_t using Eq. (1);
14:   end for
15:   Update R^t_CH,j ∀ j ∈ NBi such that R^t_CH,j > R^t_CH,l ∀ l ∈ NBi if Φ^MCj_t > Φ^MCl_t;
16:   while not receive JACCi,j or {CHl | CHinfo.nodeState = CH} ≠ ∅ do
17:     Send JREQi,j to CHj | R^t_CH,j > R^t_CH,l ∀ l ∈ NBi;
18:     Wait Tw,res;
19:     if receive JACCi,j from CHj then
20:       nodeStatei ← MNi,j;
21:       break;
22:     end if
23:   end while
24: /* Part III: Process CHinfo received from non-clustered node(s) */
25: else if NNNl∈NBi = 0 or NlistChannelsi > NlistChannelsj ∀ j ∈ NNl∈NBi then
26:   nodeStatei ← CHi;
27:   for k in listChannelsi do
28:     Calculate Φ^k_t using Eq. (1);
29:   end for
30:   Update R^t_chan,k such that R^t_chan,k > R^t_chan,m if Φ^k_t > Φ^m_t | k ∈ listChannelsi and m ∈ listChannelsi;
31:   masterChannel = k | R^t_chan,k = 1;
32:   backupChannel = k | R^t_chan,k = 2;
33:   Broadcast CHinfo;
34: else
35:   Wait Tw,CHE;
36:   if receive CHinfo from CHj then
37:     Send JREQi,j to CHj;
38:   else
39:     Run Algorithm 1(a);
40:   end if
41: end if

    Algorithm 1(b) Cluster formation procedure at clusterhead

    CHj.

    1: Receive JREQ i, j from non-clustered node i;

2: if nc,Cj ≥ HC,min after node i joins cluster Cj then
3:   Send JACCi,j;

    4: else

    5: Send JDECi, j;

    6: end if

non-clustered state. It starts to form a cluster with the non-clustered SU neighbor nodes NNl ∈ NBi.

There are two circumstances. First, there is a lack of non-clustered SU neighbor nodes of SU i (i.e., NNNl∈NBi = 0), and so SU i forms a cluster itself and becomes a clusterhead (i.e., nodeStatei ← CHi). Second, there is at least a single non-clustered SU neighbor node (i.e., NNNl∈NBi ≥ 1), and so SU i becomes a clusterhead if it has the highest clustering metric, specifically the highest number of available channels (NlistChannelsi ≥ NlistChannelsj, ∀ j ∈ NNl∈NBi), among its non-clustered SU neighbor nodes. Subsequently, the new clusterhead ranks its available channels listChannelsi using the channel capacity metric Φ^k_t (see Eq. (1) in Section 5.1) and selects a master channel with the highest rank R^t_chan,k = 1, and a backup channel with the second highest rank R^t_chan,k = 2; and subsequently broadcasts this information using the clustering message CHinfo. However, if a SU i does not have the highest clustering metric among its non-clustered SU neighbor nodes, it sets a timer Tw,CHE to allow the non-clustered SU neighbor node with the highest clustering metric among the respective neighborhood to become clusterhead, and joins the cluster with the highest rank. Note that, if a SU does not receive any clustering message CHinfo from any clusterhead upon the expiration of the timer Tw,CHE, it starts another round of the process for a
non-clustered node.

    election in cluster formation are presented in Fig. 6.

    5.4. Gateway node selection

    In CRNs, adjacent clusters may operate on different chan-

    nels due to the availability of multiple channels in the net-

work, and so a single gateway node is insufficient to provide

    two-way inter-cluster communications. Specically, given

    that a clusterhead does not switch from its master channel,

    a two-way inter-cluster communication may require each

    cluster to select a distinct gateway node in order to send

    packets between the clusters. There are two cases depending

    on the number of hops between clusterheads in the adjacent

    clusters: (a) two hops, and (b) more than two hops. These

    situations are best explained using examples alongside with

    the procedures.

We first present the procedures for gateway node selection, which are presented in Algorithms 2(a) and 2(b) at member node MNi,j and clusterhead CHj, respectively. In Algorithm 2(a), a member node MNi,j periodically scans

    Algorithm 2(a) Gateway node selection procedure at mem-

    ber nodeMNi, j.

    1: while listChannelsMNi, j do

    2: Scan each channel k for Tw,scan duration;

    3: if receive CHinfo from Cl then4: Send GREQi, j,l to clusterhead CHj;

    5: end if

    6: end while

    7: if receive GACCi, j,l from clusterhead CHj then

8:   nodeStateMNi,j ← GNi,j,l;
9: end if

Fig. 5. Cluster formation flow chart.

    Fig. 6. Packet exchange for (a) node joining and (b) clusterhead election.

    each of its available channels listChannelsMNi, j for Tw,scan

    duration within each time window in order to discover

    neighboring clusters. Suppose, it receives clustering message

CHinfo, which consists of clusterID, masterChannel, backupChannel and commonChannels, from a neighboring cluster Cl. Then, it sends gateway role request message

    (GREQi, j, l) to its clusterhead CHj in order to inform its own

    clusterhead about its potential role as a gateway node for

    cluster Cl. It may be possible that a clusterhead already has a

    gateway node to a neighboring cluster. However, SMART en-

    ables a clusterhead to explore other potential gateway nodes

    which may have lower number of hops leading to neighbor-

    ing clusterhead and higher number of available channels. As

    shown in Algorithm 2(b), the clusterhead CHj then informs

its member node MNi,j to serve as a gateway node GNi,j,l to the respective neighboring cluster Cl by sending gateway

    role acceptance message (GACCi, j, l); which happens due to

two reasons. First, there is a lack of a gateway node to cluster Cl (i.e., GNm∈Cj,j,l = ∅). Second, the member node MNi,j has the least number of hops leading to the clusterhead CHl of

Algorithm 2(b) Gateway node selection procedure at clusterhead CHj.
1: Receive GREQi,j,l from MNi,j;
2: if GNm∈Cj,j,l = ∅ then
3:   Send GACCi,j,l to MNi,j;
4: else if HMNi,j,Cl < HGNm,j,l,Cl then
5:   Send GACCi,j,l to MNi,j;
6:   Send GDECm,j,l to GNm,j,l;
7: else if HMNi,j,Cl > HGNm,j,l,Cl then
8:   Send GDECi,j,l to MNi,j;
9: else if HMNi,j,Cl = HGNm,j,l,Cl then
10:   if NlistChannelsMNi,j > NlistChannelsGNm,j,l then
11:     Send GACCi,j,l to MNi,j;
12:     Send GDECm,j,l to GNm,j,l;
13:   else
14:     Send GDECi,j,l to MNi,j;
15:   end if
16: end if


Algorithm 3(a) Cluster merging procedure at gateway node GNi,j,l.
1: /* Part I: Gateway node determines potential cluster merging */
2: if nc,Cj,l ≥ HC,merge then
3:   Send MREQi,j,l to CHj and CHl;
4: end if
5: /* Part II: Gateway node performs or cancels cluster merging */
6: if receive MACCi,j,l and MACCi,l,j from CHj and CHl, respectively then
7:   Create Cm;
8:   clusterIDm ← i;
9:   CHm ← GNi,j,l;
10:   nodeStatei ← CHi;
11:   for k in listChannelsi do
12:     Calculate Φ^k_t using Eq. (1);
13:   end for
14:   Update R^t_chan,k such that R^t_chan,k > R^t_chan,m if Φ^k_t > Φ^m_t | k ∈ listChannelsi, m ∈ listChannelsi;
15:   masterChannel = k | R^t_chan,k = 1;
16:   backupChannel = k | R^t_chan,k = 2;
17:   Broadcast CHinfo;
18:   Send RELj,m and RELl,m to CHj and CHl, respectively;
19: else if receive MDECi,j,l from CHj or MDECi,l,j from CHl then
20:   Send MCANi,j,l to CHj and CHl;
21: end if
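For illustration, part I of Algorithm 3(a) reduces to the following Python check (the function name and the set-based representation of the two clusters' common channels are assumptions): an MREQ is proposed only when the merged cluster would retain at least HC,merge common channels.

# Illustrative sketch of the merging trigger in Algorithm 3(a), part I.
def should_propose_merge(common_channels_j, common_channels_l, h_c_merge):
    """common_channels_j / _l: common-channel sets of clusters Cj and Cl.
    Returns True when an MREQ should be sent to both clusterheads."""
    shared = set(common_channels_j) & set(common_channels_l)
    return len(shared) >= h_c_merge

if __name__ == "__main__":
    print(should_propose_merge({2, 3, 5}, {2, 3}, h_c_merge=2))  # True  ({2, 3})
    print(should_propose_merge({2, 5}, {3, 5}, h_c_merge=2))     # False ({5})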

Algorithm 3(b) Cluster merging procedure at clusterhead CHj.
1: /* Part I: Clusterhead makes decision for cluster merging */
2: if receive MREQi,j,l from GNi,j,l then
3:   if MREQ∗,j,∗ = ∅ then
4:     Send MACCi,j,l to GNi,j,l;
5:   else

Fig. 7. Gateway nodes and joint gateway nodes.

    the neighboring cluster Cl as compared to an existing gate-

    way node GNm, j, l (i.e., HMNi, j ,Cl < HGNm, j,l ,Cl ). If there are po-

    tential and existing gateway nodes with similar number of

    hops leading to the clusterhead of the neighboring cluster,

the member node MNi,j with the highest number of available channels NlistChannelsMNi,j is selected. Otherwise, the clusterhead CHj declines the request by sending gateway role decline message (GDECi,j,l) to its member node MNi,j.
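The decision rule of Algorithm 2(b) can be summarized in a short Python sketch (the function name, the dict-based candidate representation, and the return labels are assumptions): fewer hops to the neighboring clusterhead wins, with ties broken by the larger number of available channels.

# Illustrative sketch of the gateway-selection rule of Algorithm 2(b).
def choose_gateway(current, candidate):
    """current/candidate: dicts with 'hops' to the neighboring clusterhead and
    'num_channels'. Returns 'candidate' if the requester should replace the
    existing gateway node, otherwise 'current'."""
    if current is None:
        return "candidate"                               # no gateway yet -> GACC
    if candidate["hops"] < current["hops"]:
        return "candidate"                               # fewer hops wins
    if candidate["hops"] == current["hops"] and \
       candidate["num_channels"] > current["num_channels"]:
        return "candidate"                               # tie broken by channel count
    return "current"                                     # otherwise decline (GDEC)

if __name__ == "__main__":
    existing = {"hops": 2, "num_channels": 3}
    print(choose_gateway(existing, {"hops": 1, "num_channels": 2}))  # candidate
    print(choose_gateway(existing, {"hops": 2, "num_channels": 5}))  # candidate
    print(choose_gateway(existing, {"hops": 3, "num_channels": 9}))  # current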

Next, we present an example of the first case in which

    adjacent clusterheads communicate with each other in two

    hops using a single gateway node. Fig. 7 shows that a clus-

    terhead CH1 in cluster C1 sends a data packet to a neigh-

    boring cluster C2 operating on a different master channel.

    First, clusterhead CH1 forwards data packets to its gateway

    node GN1,1, 2 using its master channel. Second, gateway node

    GN1,1, 2 switches to the master channel of neighboring clus-

    ter C2 and forwards data packets to clusterhead CH2. Third,

    upon completion of data packets transmissions, gateway

    node GN1,1, 2 switches back to the master channel of its own

    cluster C1. Suppose, clusterhead CH2 wants to send data pack-

    ets to clusterhead CH1. It cannot forward data packets to gate-

way node GN1,1,2 unless it first switches to the master chan-

    nel of cluster C1. However, clusterhead CH2 cannot switch

    from the master channel of its own cluster. Hence, cluster-

    head CH2 selects gateway node GN1,2,1, which can switch its

    operating channel to the master channel of cluster C1, in or-

    der to forward data packets to clusterhead CH1.

    Next, we present an example of the second case in which

    adjacent clusterheads communicate with each other in morethan two hops using more than a single gateway node. A

    set of gateway nodes connecting two clusterheads is called

    joint gateway nodes [35]. Fig. 7 illustrates gateway nodes (i.e.,

GN1,1,2 and GN1,2,1) in the first case and a set of joint gateway

    nodes (i.e., GN2,2,3 and GN1,3,2) in the second case. Using joint

    gateway nodes, a two-way inter-cluster communication uses

    a similar set of gateway nodes.

    5.5. Cluster merging

Cluster merging is the process of combining two clusters into one. In SMART, cluster merging is only possible when the number of common channels nc,Ci,j between clusters i and j satisfies a threshold for cluster merging HC,merge, specifically the minimum number of common channels in a merged cluster, for cluster stability (see C.2 in Section 1).

5.5.1. Procedure

    Algorithms 3(a), 3(b) and 3(c) present the cluster merging

    procedure for cluster Cj at gateway node GNi, j, l, clusterhead

CHj and member node MNi,j, respectively.

    In part I of Algorithm 3(a), a gateway node GNi, j, l is aware

    of a set of common channels nc,Cj,l between its cluster Cj

    and neighboring cluster Cl to which it is connected to. Thus,

    whenever a gateway node GNi, j, l discovers a potential clus-

    ter merging activity (i.e., nc,Cj,l HC,merge), it informs bothclusterheads CHj and CHl by sending cluster merging request6: SendMDECi, j,l to GNi, j,l;

    7: end if

    8: end if

    9: /* Part II: Clusterhead receives cluster merging cancellation

    message */

    10: if receiveMCANi, j,l from GNi, j,l then

    11: MREQi, j,l ;12: end if

    13: /* Part III: Existing clusterhead receives clusterhead relin-

    quishment message from new clusterhead */

    14: if receive REL j,m from CHm then

    15: Send JRECi, j,m i MN, j;16: nodeState j MNj,m;17: ClusterID j m;18: CHj CHm;19: end if

  • Y. Saleem et al. / Computer Networks 91 (2015) 196224 207Algorithm 3(c) Cluster merging procedure at member node

    MNi, j.

    1: /* Part I: Member node receives new cluster joining recom-

    mendation from existing clusterhead*/

    2: if receive JRECi, j,m from CHj then

    3: Send JREQi,m to CHm;

    4: if not receive JACCi,m within Tw,res from CHm then

    5: Send LREQi, j,m to CHj;

    6: else

7:   nodeStatei ← MNi,m;
8:   ClusterIDi ← m;
9:   CHi ← CHm;
10: end if

    11: end if

    12: /* Part II: Relinquished clusterhead (i.e., currently mem-

    ber node MNj,m) receives relay request from member node

    MNi,m */

    13: if receive LREQi, j,m fromMNi,m then

14:   nodeStatej ← RNj,m;
15:   Send LINFi,j,m to CHm for MNi,m;

    16: Send LACCi, j,m toMNi,m;

    17: end if

    message (MREQi, j, l). In part I of Algorithm 3(b), a cluster-

    head CHj may accept the cluster merging request by sending

    cluster merging acceptance message (MACCi, j, l) to GNi, j, l to

    merge clusters. This occurs when CHj is not undergoing clus-

    ter merging with another cluster (i.e.,MREQ, j, = ). Subse-quently, in part II of Algorithm 3(a), a gateway node GNi, j, lreceives MACCi, j, l and MACCi, l, j from both clusterheads CHjand CHl respectively, and becomes the clusterhead CHm of

    the newly-formed cluster Cm. After becoming a clusterhead,

    it selects master and backup channels, broadcasts CHinfo

    and sends clusterhead role relinquishment messages RELj,mand RELl,m to CHj and CHl respectively, so that they relin-

    quish their clusterhead role and become its member nodes.

    Note that, one of the clusterheads, say clusterhead CHj, may

    not agree to merge and send cluster merging decline mes-

    sage (MDECi, j, l) to GNi, j, l. In this case, as shown in part II

    of Algorithm 3(b), the gateway node GNi, j, l informs both

    clusterheads about the cancellation of the cluster merging

    procedure by sending cluster merging cancellation message

    (MCANi, j, l). This allows both clusterheads to accept cluster

    merging requests from other gateway nodes by ignoring the

current cluster merging procedure (i.e., MREQi,j,l ← ∅).
In part III of Algorithm 3(b), upon receiving the clusterhead relinquishment message RELj,m by existing clusterhead CHj from new clusterhead CHm, the clusterhead CHj informs its member nodes MN∗,j to join the new clusterhead CHm on the new master channel by sending cluster joining recommendation message JRECi,j,m ∀i ∈ MN∗,j, marks itself as a member node of the new clusterhead CHm (i.e., nodeStatej ← MNj,m), and sets its clusterhead to the new one (i.e., CHj ← CHm).

    In part I of Algorithm 3(c), there are two circumstances

    when a member node receives cluster joining recommenda-

    tion message. First, a member node is located in the trans-

    mission range of the new clusterhead CHm, and so after re-

    ceiving cluster joining recommendation message JRECi, j,mfrom its existing clusterhead CHj, it sends cluster joining

    request JREQ i,m to the new clusterhead CHm and joins its

    cluster upon receiving cluster joining acceptance message

    JACCi,m. Second, the member node is located out of the

    transmission range of the new clusterhead CHm, and so it

    does not receive cluster joining acceptance message JACCi,mwithin Tw,res duration. In this case, it sends relay request

message LREQi,j,m to its relinquished clusterhead CHj (cur-

    rently MNj,m), so that it serves as a relay node to the new

    clusterhead CHm.

    In part II of Algorithm 3(c), the relinquished clusterhead

    (currently member node MNj,m), upon receiving relay re-

quest message LREQi,j,m from member node MNi,m, changes its state to relay node RNj,m, sends relay information message LINFi,j,m to the new clusterhead CHm informing about its role as relay node for the member node MNi,m, and sends relay acceptance message (LACCi,j,m) to member node MNi,m.
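The handshake above can be illustrated with a short sketch; the send/wait_for primitives, the function name and the string message tags are assumptions introduced only for this example, not part of SMART's specification:

H_C_MERGE = 2  # threshold: minimum number of common channels in a merged cluster

def gateway_merge_check(common_channels, ch_j, ch_l, send, wait_for):
    """Run at gateway node GN(i,j,l) connecting clusters Cj and Cl."""
    if len(common_channels) < H_C_MERGE:
        return "no-merge"                      # n_c,Cj,l >= H_C,merge does not hold
    send(ch_j, "MREQ"); send(ch_l, "MREQ")     # inform both clusterheads
    replies = [wait_for(ch_j), wait_for(ch_l)]
    if all(r == "MACC" for r in replies):
        # both accepted: the gateway becomes clusterhead CHm of the merged cluster
        send("broadcast", "CHinfo")            # announce master/backup channels
        send(ch_j, "REL"); send(ch_l, "REL")   # old clusterheads relinquish their role
        return "merged"
    send(ch_j, "MCAN"); send(ch_l, "MCAN")     # any MDEC cancels the merging procedure
    return "cancelled"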

    5.5.2. Example

    An example of cluster merging is given in Fig. 8. Suppose,

    the cluster merging threshold is 2 (i.e., HC,merge = 2). Gate-way node GN1,1, 2 discovers a set of common channels be-

tween clusters C1 and C2 (i.e., {2,3}), and it fulfills the threshold HC,merge. Thus, it informs both clusterheads CH1 and CH2 about the potential cluster merging activity in which it can

    serve as a clusterhead. Suppose, both clusterheads agree to

    merge, then each of them sends a positive response and

    their respective list of member nodes to the gateway node.

    Next, the gateway node GN1,1, 2 becomes clusterhead CH1 and

    the existing clusterheads relinquish their roles and become

    member nodes of clusterhead CH1. Since member nodes

    MN1,1 and MN1,2 are located within the transmission range

    of the new clusterhead CH1, both of them become member

    nodes of the new clusterhead. Since member nodes MN2,1,

MN2,2 and MN3,2 are located out of the transmission range of

    the new clusterhead, the relinquished clusterheads become

    relay nodes (i.e., RN1 and RN2) for these nodes.

    5.6. Cluster splitting

    Cluster splitting is the process of splitting one cluster into

    two clusters. In SMART, cluster splitting occurs when a clus-

terhead CHj finds that the number of common channels in a

    cluster nc,Cj is below a threshold for the minimum number

    of common channels in a cluster due to PUs activities (i.e.,

    nC,Cj < HC,min) for cluster stability (see C.2 in Section 1).

    5.6.1. Procedure

    Algorithm 4 represents the cluster splitting procedure for

    cluster Cj at clusterhead CHj. A clusterhead has knowledge

    about the list of available channels of all nodes in its cluster.

    In part I of Algorithm 4, when clusterhead CHj determines

that cluster splitting is required (i.e., nC,Cj < HC,min), it counts the number of nodes NNCj,k in each available channel k ∈ listChannelsCj and ranks these channels based on maximum node degree (i.e., tchan,k > tchan,l if NNCj,k > NNCj,l | k ∈ listChannelsCj, l ∈ listChannelsCj \ k). Then, it selects a certain number of most favorable channels as common channels of the new cluster Ck.

Fig. 8. An example of cluster merging from clusters C1 and C2 to cluster C1.

    Algorithm 4 Cluster splitting procedure at clusterhead CHj.

    1: if nC,Cj < HC,min then

    2: /* Part I: Rank available channels based on maximum

    node degree */

3: Count NNCj,k ∀k ∈ listChannelsCj;
4: Rank channels tchan,k > tchan,l if NNCj,k > NNCj,l | k ∈ listChannelsCj, l ∈ listChannelsCj \ k;
5: /* Create a new cluster Ck */
6: commonChannelsCk ← {N | tchan,k = 1, 2, . . ., HC,min};
7: Identify listNodesCk | commonChannelsCk ⊆ listChannelsMNi,j, ∀i ∈ listNodesCj;
8: Create cluster Ck ← listNodesCk;
9: /* Part II: Create second cluster Cl */
10: listNodesCl ← listNodesCj \ listNodesCk;
11: commonChannelsCl ← ∩_{i=1}^{clusterSizeCl} listChannelsMNi,l | NcommonChannelsCl = HC,min;
12: Create cluster Cl ← listNodesCl;
13: /* Part III: Select clusterheads for new clusters and broadcast CHinfo */
14: Select MNi,k as CHk if NlistChannelsMNi,k > NlistChannelsMNj,k | ∀j ∈ listNodesCk \ MNi,k;
15: Select MNi,l as CHl if NlistChannelsMNi,l > NlistChannelsMNj,l | ∀j ∈ listNodesCl \ MNi,l;
16: Broadcast clustering message CHinfo;
17: if CHj ≠ CHk and CHj ≠ CHl then
18: if CHj ∈ nodeListCk then
19: nodeStateCHj ← MNi,k;
20: else if CHj ∈ nodeListCl then
21: nodeStateCHj ← MNi,l;
22: end if
23: end if
24: end if

The number of selected channels must satisfy the threshold for the minimum number of common channels in a cluster HC,min (i.e., commonChannelsCk ← {N | tchan,k = 1, 2, . . ., HC,min}). The clusterhead then identifies a list of nodes (listNodesCk) that fulfill this criterion and forms a cluster comprised of these nodes (i.e., Ck ← listNodesCk).
Next, in part II of Algorithm 4, the clusterhead CHj identifies the remaining nodes, which are yet to be clustered, for new cluster Cl (i.e., listNodesCl ← listNodesCj \ listNodesCk), and identifies a set of common channels among these nodes (i.e., commonChannelsCl ← ∩_{i=1}^{clusterSizeCl} listChannelsMNi,l). The number of common channels must equal the threshold for the minimum number of common channels in a cluster (i.e., NcommonChannelsCl = HC,min).

    The clusterhead forms another cluster comprised of these

nodes (i.e., Cl ← listNodesCl). Finally, in part III of Algorithm 4, the clusterhead CHj se-

    lects a node as clusterhead which has the highest clustering

    metric (i.e., the highest number of available channels among

    nodes in a cluster) for both newly created clusters Ck and Cl,

and broadcasts clustering message CHinfo. Subsequently, the new clusterheads CHk and CHl, elected by the old clusterhead CHj, select their master and backup channels and broadcast CHinfo, which is not shown in the algorithm. Moreover, if the clusterhead CHj is not elected as a clusterhead after cluster splitting, it relinquishes its role of clusterhead and becomes a member node of its respective cluster (i.e., nodeStateCHj ← MNi,k or nodeStateCHj ← MNi,l).
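Parts I and II of Algorithm 4 can be sketched in Python as follows; the dictionary-of-channel-sets input and the split_cluster name are illustrative assumptions rather than the authors' implementation:

from collections import Counter

def split_cluster(list_channels, h_c_min):
    """list_channels: {node_id: set of available channels} for the cluster being split."""
    # Part I: rank channels by node degree (how many nodes can use each channel),
    # breaking ties by channel index so the ranking is deterministic.
    degree = Counter(ch for chans in list_channels.values() for ch in chans)
    ranked = sorted(degree, key=lambda ch: (-degree[ch], ch))

    # The h_c_min highest-ranked channels become the common channels of new cluster Ck;
    # every node that has all of them joins Ck.
    common_k = set(ranked[:h_c_min])
    nodes_k = [n for n, chans in list_channels.items() if common_k <= chans]

    # Part II: the remaining nodes form Cl, keeping h_c_min channels common to all of them.
    nodes_l = [n for n in list_channels if n not in nodes_k]
    common_l = set.intersection(*(list_channels[n] for n in nodes_l)) if nodes_l else set()
    return (nodes_k, common_k), (nodes_l, set(sorted(common_l)[:h_c_min]))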

    5.6.2. Example

    An example of cluster splitting is given in Fig. 9.

    Initially, cluster C1 is comprised of six nodes with

masterChannel=5, backupChannel=6 and commonChannels = {5,6}. Suppose channels 5 and 6 are re-occupied by PUs and the threshold for the minimum number of common channels in a cluster is HC,min = 2. So, there is no common channel available for intra-cluster communication (i.e., nC,C1 = 0 < HC,min = 2) and thus cluster

    splitting takes place. The clusterhead CH1 is aware of a list

    of available channels listChannels of all nodes in its cluster.

    Thus, it counts the number of nodes in each available channel

    and ranks these channels based on maximum node degree.

    In Fig. 9, channel 1 is available to four nodes (i.e., CH1, MN1,1,

MN2,1 and MN3,1), channel 2 is available to five nodes (i.e.,

    CH1, MN1,1, MN2,1, MN3,1 and MN4,1), channel 3 is available

    to four nodes (i.e., CH1, MN1,1, MN2,1 and MN3,1), channel

    4 is available to three nodes (i.e., CH1, MN4,1 and MN5, 1),

and channel 7 is available to three nodes (i.e., CH1, MN4,1 and MN5,1). Therefore, channels are ranked as tchan,2 = 1, tchan,1 = 2, tchan,3 = 3, tchan,4 = 4 and tchan,7 = 5.

Fig. 9. An example of cluster splitting from cluster C1 to clusters C1 and C2.

Afterward, the channels which fulfill HC,min = 2 are those ranked first and second (i.e., channels 2 and 1), and the clusterhead identifies the nodes having these channels in their respective list of available channels (e.g., CH1, MN1,1, MN2,1 and MN3,1). Subsequently, it forms a cluster of these nodes (i.e., cluster C1 in Fig. 9 after cluster splitting), selects a node as clusterhead

    which has the highest clustering metric (i.e., the highest

    number of available channels in a cluster), and sends cluster-

ing message CHinfo containing information of the newly created cluster, including member nodes and clusterhead,

    to all nodes in cluster C1. Since the clusterhead CH1 has the

    highest clustering metric, it remains as the clusterhead in

    the newly created cluster. Subsequently, it selects master

and backup channels, and broadcasts CHinfo. Next, the clusterhead CH1 identifies common channels among the remaining nodes in the cluster. It identifies that

    MN4,1 and MN5, 1 are the remaining nodes having two chan-

    nels (i.e., channels 4 and 7) in common. Since these channels

fulfill the threshold HC,min = 2, the clusterhead CH1 creates another cluster for them (i.e., cluster C2 in Fig. 9 after cluster splitting) and selects MN4,1 as a clusterhead for cluster C2 as it has the highest clustering metric. Subsequently, it sends clustering message CHinfo containing information of the new cluster C2, including member nodes and clusterhead, to these

    remaining nodes of the cluster. Finally, at the end of cluster

    splitting, there are two clusters (e.g., C1 and C2), one is com-

    prised of four nodes and the other is comprised of two nodes,

    as shown in Fig. 9.
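Reusing the split_cluster sketch given after Algorithm 4, the channel availability listed above for Fig. 9 reproduces this split (the per-node channel sets are taken from the text):

cluster_c1 = {
    "CH1":   {1, 2, 3, 4, 7},
    "MN1,1": {1, 2, 3},
    "MN2,1": {1, 2, 3},
    "MN3,1": {1, 2, 3},
    "MN4,1": {2, 4, 7},
    "MN5,1": {4, 7},
}
(new_c1, common_c1), (new_c2, common_c2) = split_cluster(cluster_c1, h_c_min=2)
print(new_c1, common_c1)  # ['CH1', 'MN1,1', 'MN2,1', 'MN3,1'] with channels {1, 2}
print(new_c2, common_c2)  # ['MN4,1', 'MN5,1'] with channels {4, 7}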

6. SMART: routing

This section presents the routing scheme of SMART, which

    runs on a clustered network. The cluster-based routing

    scheme is based on a RL-based routing model known as

    Q-routing, which was proposed by Boyan and Littman [36].

    Q-routing is inspired by a RL approach known as Q-learning

    [15]. The main objective of the routing scheme is to provide

    stable routes with higher OFF-state probabilities of channels

    along the routes in order to reduce SUs transmission inter-

    ruption due to the re-appearance of PUs activities, particu-

    larly the number of channel switches and the occurrence of

    re-routing. This leads to enhanced SUs network performance

    and so it is expected to contribute to the improvement of SUs

energy efficiency. Our approach is different from the tradi-

    tional Q-routing approach which is based on end-to-end de-

    lay [36], as our approach is based on OFF-state probability ofcluster C1 to clusters C1 and C2.

    the bottleneck channel of a route. To the best of our knowl-

    edge, the application of RL to cluster-based routing in CRNs

    is novel in its approach.

    In SMART, the Q-routing model is embedded in each SU

because it has an equal opportunity to serve as a cluster-

    head. The routing decision (i.e., SU next-hop neighbor node

    selection) is made by the clusterhead of a SU source node and

    the intermediate clusterheads based on channel selection

    performed by clustering. The important representations in

the RL model for an agent are state, action and reward. State st

    represents the decision-making factors, which affect the re-

    ward (or network performance), observed by an agent from

the operating environment. Action ati represents an agent i's action, which may change or affect the state (or operating environment) and reward (or network performance), and so the agent learns to take optimal actions most of the time. Reward rt+1i(st+1) represents the positive or negative effects, received at time t + 1, of an agent's action on its operating environment taken at time t. At decision epoch t, an agent i

    observes its operating environment to determine its current

    state st. Based on the state st, the agent chooses an action ati.

Next, at decision epoch t + 1, the state st changes to st+1 as a consequence of the action ati, and the agent i receives reward rt+1i(st+1).
In SMART, the state st represents the SU destination node and the action ati represents SU i's next-hop neighbor node

    that relays packets toward SU destination node st. At time t,

a clusterhead i estimates the Q-value Qti(st, ati) for each SU neighbor node ati, which indicates the OFF-state probability of the bottleneck channel along a route, and updates its

    routing table of Q-values as shown in Table 6. The bottleneck

    channel is the channel of a link having the least OFF-state

    probability for the next time slot along a route. Using the

    OFF-state probability of channels helps to reduce the effects

    of the dynamicity of channel availability in CRNs by selecting

    stable routes that have channels with higher OFF-state prob-

    ability in order to maximize the utilization of white spaces

    (see C.1 in Section 1). Each column in Table 6 shows a SU

    neighbor node of the SU source node i, while each row shows

    a destination node st. Each cell represents the Q-value of a

next-hop neighbor node ati selected by SU source node i to

    reach the SU destination node st. A clusterhead calculates

    Q-value for each SU neighbor node, while SU intermedi-

    ate node calculates the OFF-state probability of the bottle-

    neck channel from itself toward the destination node st and

forwards this probability to an upstream node in a RREP message.

Table 6
Routing table of Q-values at SU source node i.

                             SU neighbor node ati
SU destination node st      1            2            3            ...   m
  1                         -            Qti(1,2)     Qti(1,3)     ...   Qti(1,m)
  2                         Qti(2,1)     -            Qti(2,3)     ...   Qti(2,m)
  3                         Qti(3,1)     Qti(3,2)     -            ...   Qti(3,m)
  ...                       ...          ...          ...          ...   ...
  n                         Qti(n,1)     Qti(n,2)     Qti(n,3)     ...   Qti(n,m)

A clusterhead i makes route selection by selecting a

SU next-hop neighbor node ati with the maximum Q-value Qti(st, ati) from the routing table of Q-values for destination node st. Upon sending a RREQ to SU neighbor node ati, a SU i receives the OFF-state probability of the bottleneck channel for the route from SU neighbor node ati to destination node st through a RREP. A SU i updates its Q-value Qti(st, ati) at time t + 1 for destination node st via next-hop ati = j as follows:

Qt+1i(st, j) ← ((1 - α) × Qti(st, j)) + α × [ min( rt+1i(j), max k∈atj Qtj(st, k) ) ]     (2)

where 0 ≤ α ≤ 1 represents the learning rate, rt+1i(j) is the OFF-state probability of the operating channel between SU i and SU neighbor node j, Qtj(st, k) is the OFF-state probability of the bottleneck channel along a route from SU node k ∈ atj (i.e., a SU next-hop neighbor node of SU j) to SU destination node st, and min(rt+1i(j), max k∈atj Qtj(st, k)) represents the OFF-state probability of one of the bottleneck channels among rt+1i(j) and max k∈atj Qtj(st, k). This means that either

    the link connecting SU i and SU j, or one of the links in the

    route established between SU j and SU destination node st is

    the bottleneck link.
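A minimal sketch of this update rule and of the resulting route selection, assuming a nested-dictionary routing table in the spirit of Table 6 (the function names and table layout are illustrative, not the authors' code):

ALPHA = 0.5  # learning rate; 0.5 is the value used in the example of Section 6.2

def update_q(q_table, dest, next_hop, r_link, q_downstream):
    """Apply Eq. (2). q_table: {dest: {next_hop: Q}}; r_link: OFF-state probability of
    the channel toward next_hop; q_downstream: best Q reported by next_hop in the RREP."""
    old = q_table.setdefault(dest, {}).get(next_hop, 0.0)
    bottleneck = min(r_link, q_downstream)            # weakest link on this route so far
    q_table[dest][next_hop] = (1 - ALPHA) * old + ALPHA * bottleneck
    return q_table[dest][next_hop]

def select_next_hop(q_table, dest):
    """Route selection: the SU neighbor with the highest Q-value toward dest."""
    return max(q_table[dest], key=q_table[dest].get)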

Using this model, SUs learn about the routes on the fly.

    Due to different levels of PUs activities, the routes may have

    different OFF-state probabilities of channels. Thus, the se-

    lected route will have higher OFF-state probabilities of chan-

    nels which will make the route stable as compared to other

    available routes. The rest of this section presents the proce-

    dures for route discovery and route selection using RL, fol-

    lowed by an example.

    6.1. Procedure

    SMART is designed for ad hoc CRN which is character-

    ized by the dynamicity of channel availability due to differ-

ent levels of PUs activities. The exchange of RREQ and RREP messages is an efficient way to discover dynamic routes in SMART. In SMART, a RREQ message is used to find a route from

    a SU source node to a SU destination node if the SU source

    node is not aware of any route or the existing route toward

the SU destination node has expired. A RREP message is used to

    inform the SU source node about a route toward a SU desti-

    nation node and the OFF-state probability of the bottleneck

    channel along the route. Since a SU member node sends its

    data to its clusterhead, which serves as a point of process

    for all member nodes in its cluster, we consider a SU clus-

terhead as the source node. Suppose, a clusterhead i does not know a route to destination node st = l (i.e., routei,l = ∅), so it triggers route discovery. It creates a RREQ message and includes its own node ID in the route record (i.e., RREQRREC,i,l ← i). Then, it floods the RREQ message by broadcasting on its master channel. When a gateway node GNi,j,l receives the RREQ message from its clusterhead CHi, it forwards the message to its neighboring clusters.
In part I of Algorithm 5, when a SU intermediate clusterhead m receives a RREQ message, it updates the RREQ message by appending its own node ID m to the list of route record (i.e., RREQRREC,i,l ← RREQRREC,i,l ∪ m). Subsequently, it rebroadcasts the message on its master channel if it is not the SU destination node l, and the message may be forwarded by gateway nodes to neighboring clusters. When a RREQ message is received by the SU destination node l, it generates a RREP message using the route found in the RREQ message. In part

    II of Algorithm 5, when a SU intermediate node m receives

    a RREP message from its next-hop neighbor node n, it calcu-

    lates the OFF-state probability of the bottleneck channel from

itself to SU destination node st = l via its next-hop neighbor node n (i.e., Qtm(l, n)), embeds it into the RREP message and forwards it to the downstream node in the list of route record

    RREPRREC,i,l . The RREP message follows the reverse route that

    the RREQ message traverses so that it reaches the SU source

Algorithm 5 RREQ propagation and RREP processing at SU node m.
1: /* Part I: RREQ propagation */
2: if receive RREQ and m ∉ RREQRREC,i,l then
3: RREQRREC,i,l ← RREQRREC,i,l ∪ m;
4: if m = l then
5: /* l is the destination node */
6: Create RREP;
7: Send RREP using the reverse path in RREQRREC,i,l;
8: else
9: Rebroadcast RREQ;
10: end if
11: /* Part II: RREP processing */
12: else if receive RREP from n then
13: if m = i then
14: /* i is the source node */
15: Update Qti(l, n) using Eq. (2);
16: else
17: Calculate probability of bottleneck channel Qtm(l, n);
18: Embed Q-value in RREP;
19: Forward RREP;
20: end if
21: end if
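For illustration only, part II of Algorithm 5 (propagating the OFF-state probability of the bottleneck channel back toward the source) might look as follows; the RREP dictionary layout and the function name are assumptions:

def on_rrep(node_id, source_id, rrep, link_off_prob):
    """rrep: {'route': [clusterhead IDs from source to destination], 'q': bottleneck
    OFF-state probability reported by the upstream sender}; link_off_prob: OFF-state
    probability of the channel on which this RREP arrived."""
    if node_id == source_id:
        # the source feeds link_off_prob and rrep['q'] into the Eq. (2) update
        return ("update-q-table", link_off_prob, rrep["q"])
    rrep["q"] = min(link_off_prob, rrep["q"])                   # new bottleneck so far
    prev_hop = rrep["route"][rrep["route"].index(node_id) - 1]  # next node toward source
    return ("forward", prev_hop, rrep)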

Fig. 10. An example of routing from cluster C1 to SU BS.

    node i. When SU source node i receives a RREP message, it

    updates its routing table of Q-values using Eq. (2) for SU des-

    tination node st and performs route selection.

    Once routes have been discovered and the routing table

    of Q-values has been updated at SU source node i, it selects

    a next-hop clusterhead with the highest Q-value from the

    routing table of Q-values for SU destination node st and starts

data transmission along the selected route.

6.2. Example

Fig. 10 shows a cluster-based routing example in which SU source node MN1,1 sends data packets to SU destination node

    BS. Clusterhead CH1 initiates route discovery by broadcast-

ing a RREQ message, with route record list RREQRREC,CH1,BS = [CH1], on its masterChannel=1. When gateway nodes

    GN1,1,2 and GN2,1,3 receive the message, they forward it to

    their respective neighboring clusterheads CH2 and CH3.

    When clusterhead CH2 receives the RREQ message, it ap-

    pends its address in route record list (i.e., RREQRREC,CH1,BS =[CH1,CH2]) and forwards it to its neighboring clusterhead

    CH4 via its gateway node GN2,2,4. Similar process runs on

    clusterheads CH4 and CH3 for RREQ propagation. Finally,

    there are two routes received at SU destination node BS,

specifically routes CH1-CH2-CH4-BS and CH1-CH3-BS. The routes are established at the cluster level, therefore gateway

    nodes are not included in these routes. Subsequently, BS gen-

    erates RREP messages and sends them back toward the SU

    source node CH1 using the reverse routes that RREQ mes-

    sages traverse.

    When clusterhead CH4 receives the RREP message from

    SU destination node BS, it updates its Q-value with the

    OFF-state probability of operating channel between its gate-

way node GN2,4,BS and destination node BS (i.e., QtCH4(BS, ·) = 0.4). The next-hop of cluster C4 is the SU destination node BS itself. Subsequently, it em-

    beds this Q-value in the RREP message and forwards it to

    the upstream clusterhead CH2 via its gateway node GN1,4,2.

    When clusterhead CH2 receives the RREP message from clus-

    terhead CH4 through its gateway node GN2,2,4, it compares

    the OFF-state probability provided by clusterhead CH4 with

    OFF-state probability of the operating channel between the

    pair of gateway nodes GN2,2,4 and GN1,4,2 for reaching clus-

    terhead CH4, and nds that this communication channel has

    lower OFF-state probability (i.e., t = 0.3 < t = 0.4), there-

    4 5

    fore it updates theQ-value (i.e.,QtCH2

    (BS,CH4) = 0.3), embedsit in the RREP message and forwards it to its upstream node

    (i.e., SU source node CH1) via gateway node GN1,2,1.

    When SU source node CH1 receives the RREP mes-

    sage from clusterhead CH2, it updates its routing table of

Q-values. Since this is the first attempt of route discovery, the Q-value has been initialized to QtCH1(BS, CH2) = 0. The OFF-state probability of the operating channel between SU source node CH1 and clusterhead CH2 is 0.5, therefore rtCH1(CH2) = 0.5. Hence, using Eq. (2) with α = 0.5, which is discussed in Section 6, the Q-value QtCH1(BS, CH2) is updated by QtCH1(BS, CH2) ← ((1 - 0.5) × 0) + (0.5 × min(0.5, 0.3)) = 0.15. A similar process runs on clusterhead CH3 to process the RREP. Hence, at SU source node CH1, the Q-value is updated as QtCH1(BS, CH3) ← ((1 - 0.5) × 0) + (0.5 × min(0.1, 0.4)) = 0.05.
Finally, the routing table of SU source node CH1 has two entries, specifically QtCH1(BS, CH2) = 0.15 and QtCH1(BS, CH3) = 0.05. It selects CH2 as its SU next-hop node because it provides the highest Q-value for the route leading to SU destination node BS. Note that the route CH1-CH2-CH4-BS is selected, although it is longer than route CH1-CH3-BS, because it is more stable with higher OFF-state

    probability of bottleneck channel along the route.
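Assuming the update_q and select_next_hop sketches from Section 6, the two Q-values in this example can be reproduced directly (α = 0.5, initial Q-values of 0):

q = {}
# Route via CH2: link CH1-CH2 has OFF-state probability 0.5, CH2 reports 0.3.
print(update_q(q, dest="BS", next_hop="CH2", r_link=0.5, q_downstream=0.3))  # 0.15
# Route via CH3: link CH1-CH3 has OFF-state probability 0.1, CH3 reports 0.4.
print(update_q(q, dest="BS", next_hop="CH3", r_link=0.1, q_downstream=0.4))  # 0.05
print(select_next_hop(q, "BS"))  # 'CH2', the more stable route toward BS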

6.3. Route update

Routing entries may become stale due to PUs activities, as

    well as the changes of the underlying cluster structure caused

    by cluster merging or splitting. Therefore, regular route up-

    date is performedwhenever a SU i has data to send to SU des-

    tination node st. During a route update, a SU i rst veries its

    existing route entry Qti(st , at

    i) by sending a dummy message

    to a next-hop neighbor node atileading toward a SU destina-

    tion node st. If it receives a positive acknowledgement, then

    the route is valid. Otherwise, the route is invalid and a new

    route needs to be discovered using Algorithm 5.
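A small sketch of this check, with send_dummy and discover_route left as assumed callbacks (they are not defined by SMART):

def refresh_route(q_table, dest, send_dummy, discover_route):
    """Verify the cached entry before use; otherwise fall back to route discovery."""
    entries = q_table.get(dest, {})
    if entries:
        next_hop = max(entries, key=entries.get)   # current best next hop toward dest
        if send_dummy(next_hop, dest):             # positive acknowledgement received
            return next_hop                        # existing route entry is still valid
    return discover_route(dest)                    # stale or missing entry: rediscover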

    7. Analysis of SMART overhead

    In this section, we investigate the overhead of SMART,

    specically the message overhead and time complexity. The

    message overhead M is dened as the number of messages

    exchanged among SUs incurred by cluster formation and

    cluster maintenance (i.e., cluster merging and splitting), as

    well as gateway node selection, which are all required to

    form and maintain clusters. If a SU sends a message to its

    neighboring SUs, the message overhead is increased by one

[37]. The time complexity T is defined as the number of time

    steps incurred by the clustering algorithm to form and main-

tain clusters, specifically, from the moment a cluster requires

    changes due to the changes of the underlying topology or

    spectrum until a valid cluster structure is formed again [38].

    We assume discrete time steps. One time step is the time

    incurred between the transmission of a message from a SU

    sender node and the completion of processing the message

    at a SU receiver node [37].

Various procedures in SMART incur overheads, including clustering message CHinfo exchange, cluster formation, gateway node selection, cluster merging, cluster splitting, as

    well as routing.

    7.1. Clustering message exchange overhead

Clustering messages CHinfo are broadcast in every predefined interval so that SUs can learn clustering information from their SU neighboring nodes. We denote the number of clustering messages CHinfo transmitted by a SU per time step as rate rCHinfo, the total number of SUs involved in a

    clustering mechanism (e.g., cluster formation, gateway node

    selection, cluster merging and splitting) as N and the average

    number of time steps incurred by a clustering mechanism as

D. Each SU transmits a certain number of clustering messages per time step at a rate rCHinfo in order to maintain neighborhood and clustering knowledge. This incurs an overhead of N × rCHinfo packets per time step. To sum up, the message overhead of clustering message CHinfo is M = D × N × rCHinfo messages and the time complexity is T = D time steps.
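As a worked instance of this formula (the numbers are arbitrary, chosen only for illustration):

D, N, r_chinfo = 4, 20, 1   # time steps, SUs involved, CHinfo messages per time step
M = D * N * r_chinfo        # 80 messages exchanged in total
T = D                       # 4 time steps
print(M, T)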

    7.2. Cluster formation overhead

    Initially, each SU is in non-clustered state and it scans

    each of the available channels in order to discover neighbor-

    ing clusterheads and non-clustered nodes. We call this scan-

ning period of cluster formation as Tscan. Hence, each SU must take at least Tscan time steps before associating itself with a cluster either as a clusterhead or a member node. We denote the number of neighboring clusterheads of a SU as dch. If a SU decides to become a clusterhead, it takes M = 1 message and T = 1 time step. Otherwise, if a SU decides to join a neighboring clusterhead as a member node, it sends a cluster joining request to each of them respectively until it is accepted by one of them. This process takes at least M = 2 messages and T = 1 time step if the first clusterhead accepts the joining request, and at most M = 2dch messages and T = dch time steps if the last clusterhead in the list accepts the joining request. However, it may be possible that a SU decides to join a clusterhead as a member node, but none of the clusterheads accepts its joining request. So, the SU then decides to become a clusterhead. In this case, it takes M = 2dch + 1 messages and T = dch + 1 time steps. To sum up, we can say that the number of messages and time required for a SU to associate itself with a cluster strongly depends upon the network topology. Due to this fact, we can derive lower and upper bounds. For the lower bound, M = 1 message and T = Tscan + 1 time steps are required, while for the upper bound, M = 2dch + 1 messages and T = Tscan + dch + 1 time steps are required for a non-clustered node to associate itself with a cluster.
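These bounds can be expressed as a small helper (the function name is an assumption, used only to restate the formulas):

def cluster_formation_bounds(t_scan, d_ch):
    """Return ((M_low, T_low), (M_high, T_high)) for a non-clustered SU."""
    lower = (1, t_scan + 1)                    # the SU becomes a clusterhead immediately
    upper = (2 * d_ch + 1, t_scan + d_ch + 1)  # all d_ch clusterheads decline the join
    return lower, upper

print(cluster_formation_bounds(t_scan=3, d_ch=4))  # ((1, 4), (9, 8))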

    7.3. Gateway node selection overhead

    Gateway node selection procedure is performed when a

    member node discovers a new link toward a neighboring

    cluster. A member node sends gateway role request message

to its clusterhead upon such discovery, which takes M = 1 message and T = 1 time step. If the clusterhead decides to select the respective member node as a gateway node, it confirms this by sending back a gateway role acceptance message to the member node, which takes an additional M = 1 message and T = 1 time step. To sum up, the message overhead of gateway node selection is M = 2 messages and the time complexity is T = 2 time steps.

    7.4. Cluster merging overhead

    Whenever a gateway node discovers a potential cluster

    merging activity, it sends cluster merging request message to

    both clusterheads, which are one hop away, and so it takes

M = 1 message and T = 1 time step. If both clusterheads agree to merge, they send back cluster merging acceptance messages to the gateway node, which takes M = 2 messages and T = 1 time step. On receiving cluster merging acceptance messages from both clusterheads, the gateway node becomes the new clusterhead and it broadcasts clustering message CHinfo, which takes M = 1 message and T = 1 time step. Subsequently, when the existing clusterheads receive clustering message CHinfo from the new clusterhead, they relinquish their role, join the new clusterhead and send cluster joining recommendation messages to their respective member nodes m to join the new clusterhead, which takes M = 2 messages and T = 1 time step. The member nodes join the new clusterhead if they are within the transmission range of the new clusterhead, or send relay request messages to their relinquished clusterheads if they are not within the transmission range of the new clusterhead. This incurs M = 2m messages and T = 1 time step. Subsequently, the new clus-

    terhead sends the joining acceptance message and the two

Table 7
Summary of SMART overhead.

Overhead type                   Message overhead
Clustering message exchange     D × N × rCHinfo
Cluster formation               [1, 2dch + 1]
Gateway node selection          2
Cluster merging                 2m + 11
Cluster splitting               6
Routing                         nH + HG + ...

    relinquished clusterheads send relay acceptance message to

their member nodes, which takes M = 3 messages and T = 1 time step. Finally, the relinquished clusterheads send relay

    information message to the new cluste