Using Feedback in Collaborative Reinforcement Learning to Adaptively Optimise MANET Routing
Citation:
Dowling J., Curran E., Cunningham R., Cahill V., "Using Feedback in Collaborative Reinforcement Learning to Adaptively Optimise MANET Routing", IEEE Transactions on Systems, Man, and Cybernetics - Part A, vol. 35, no. 3, 2005, pp. 360-372.
Abstract:
Designers face many system optimization problems
when building distributed systems. Traditionally, designers have
relied on optimization techniques that require either prior knowledge
or centrally managed runtime knowledge of the system's
environment, but such techniques are not viable in dynamic
networks where topology, resource, and node availability are
subject to frequent and unpredictable change. To address this
problem, we propose collaborative reinforcement learning (CRL)
as a technique that enables groups of reinforcement learning
agents to solve system optimization problems online in dynamic,
decentralized networks. We evaluate an implementation of CRL
in a routing protocol for mobile ad hoc networks, called SAMPLE.
Simulation results show how feedback in the selection of links
by routing agents enables SAMPLE to adapt and optimize its
routing behavior to varying network conditions and properties,
resulting in optimization of network throughput. In the experiments,
SAMPLE displays emergent properties such as traffic flows
that exploit stable routes and reroute around areas of wireless
interference or congestion. SAMPLE is an example of a complex
adaptive distributed system.
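The abstract above does not give the CRL update rule, but the feedback-driven link selection it describes can be illustrated with a minimal Q-routing-style sketch. Everything below (the `RoutingAgent` class, its parameters, and the cost model) is an illustrative assumption, not the paper's actual SAMPLE protocol:

```python
import random

class RoutingAgent:
    """One node's routing agent: a hypothetical Q-routing-style sketch,
    not the CRL algorithm from the paper."""

    def __init__(self, node, neighbours, alpha=0.5, epsilon=0.1):
        self.node = node
        self.alpha = alpha      # learning rate
        self.epsilon = epsilon  # exploration probability
        # q[n] estimates the cost of delivering a packet via neighbour n
        # (lower is better); start with a uniform optimistic estimate.
        self.q = {n: 1.0 for n in neighbours}

    def select_link(self):
        """Epsilon-greedy choice of next-hop neighbour."""
        if random.random() < self.epsilon:
            return random.choice(list(self.q))
        return min(self.q, key=self.q.get)

    def feedback(self, neighbour, link_cost, neighbour_estimate):
        """Collaborative feedback: a neighbour advertises its own estimated
        downstream cost, and we move our Q-value toward link cost + that estimate."""
        target = link_cost + neighbour_estimate
        self.q[neighbour] += self.alpha * (target - self.q[neighbour])

# Usage: the agent at node A learns from feedback that neighbour B
# offers a cheaper route than neighbour C.
random.seed(0)
agent = RoutingAgent("A", ["B", "C"])
for _ in range(50):
    agent.feedback("B", 0.1, 0.2)  # B advertises a low downstream cost
    agent.feedback("C", 1.0, 2.0)  # C advertises a high downstream cost
print(agent.select_link())
```

Decaying Q-values for neighbours that stop advertising (as links break in a MANET) is the kind of adaptation the paper's feedback mechanism addresses; this sketch omits it for brevity.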
Author's Homepage:
http://people.tcd.ie/vjcahill
Description:
PUBLISHED
Author: CAHILL, VINNY
Type of material:
Journal Article
Series/Report no:
IEEE Transactions on Systems, Man, and Cybernetics - Part A, vol. 35, no. 3
Availability:
Full text available
Keywords:
Feedback, learning systems, mobile ad hoc network, routing