The University of Dublin | Trinity College -- Ollscoil Átha Cliath | Coláiste na Tríonóide
Trinity's Access to Research Archive


Please use this identifier to cite or link to this item: http://hdl.handle.net/2262/32669

Title: A collaborative reinforcement learning approach to urban traffic control
Other Titles: Proceedings of the 2008 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology (WI-IAT '08)
IEEE/WIC/ACM International Conference on Intelligent Agent Technology (IAT '08)
Author: CAHILL, VINNY
SALKHAM, AS'AD
Author's Homepage: http://people.tcd.ie/vjcahill
http://people.tcd.ie/salkhama
Keywords: Computer Science
Issue Date: 2008
Publisher: IEEE Computer Society
Citation: As'ad Salkham, Raymond Cunningham, Anurag Garg, and Vinny Cahill, "A collaborative reinforcement learning approach to urban traffic control", Proceedings of the 2008 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology (WI-IAT '08), vol. 2, Sydney, NSW, 9-12 Dec 2008, IEEE Computer Society, pp. 560-566.
Series/Report no.: 2
Abstract: The high growth rate of vehicles per capita now poses a real challenge to efficient Urban Traffic Control (UTC). An efficient solution to UTC must be adaptive in order to deal with the highly dynamic nature of urban traffic. In the near future, global positioning systems and vehicle-to-vehicle/infrastructure communication may provide a more detailed local view of the traffic situation that could be employed for better global UTC optimization. In this paper we describe the design of a next-generation UTC system that exploits such local knowledge about a junction’s traffic in order to optimize traffic control. Global UTC optimization is achieved using a local Adaptive Round Robin (ARR) phase switching model optimized using Collaborative Reinforcement Learning (CRL). The design employs an ARR-CRL-based agent controller for each signalized junction that collaborates with neighbouring agents in order to learn appropriate phase timing based on the traffic pattern. We compare our approach to a non-adaptive fixed-time UTC system and to a saturation balancing algorithm in a large-scale simulation of traffic in Dublin’s inner city centre. We show that the ARR-CRL approach can provide a significant improvement, resulting in up to ~57% lower average waiting time per vehicle compared to the saturation balancing algorithm.
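
The abstract outlines the approach at a high level: one reinforcement-learning agent per signalized junction adjusts Adaptive Round Robin phase timings and blends in value estimates advertised by neighbouring junctions (the collaborative element of CRL). The full paper is not reproduced in this record, so the following Python fragment is only a minimal, hypothetical sketch of that idea: it assumes a tabular Q-learning update, a state built from discretised queue lengths, actions that lengthen or shorten the current green phase, a reward derived from negative queue length or waiting time, and a simple averaging rule for neighbour-advertised values. Class and parameter names (JunctionAgent, neighbour_weight, and so on) are illustrative and do not come from the paper.

    # Minimal, hypothetical sketch of an ARR-CRL-style junction controller.
    # Not the authors' implementation: the state/action encoding, the reward,
    # and the neighbour-blending rule are assumptions made for illustration.
    import random
    from collections import defaultdict

    class JunctionAgent:
        """One agent per signalized junction; each action nudges the current
        round-robin phase duration (shorten, keep, or lengthen)."""

        ACTIONS = (-5, 0, +5)  # change to the current phase's green time, in seconds

        def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1, neighbour_weight=0.2):
            self.q = defaultdict(float)      # Q[(state, action)] -> value estimate
            self.alpha = alpha               # learning rate
            self.gamma = gamma               # discount factor
            self.epsilon = epsilon           # exploration probability
            self.neighbour_weight = neighbour_weight
            self.neighbours = []             # neighbouring JunctionAgents
            self.advertised_value = 0.0      # latest local value shared with neighbours

        def encode_state(self, queue_lengths):
            # Discretise per-approach queue lengths into a small, hashable state.
            return tuple(min(q // 5, 3) for q in queue_lengths)

        def value(self, state):
            return max(self.q[(state, a)] for a in self.ACTIONS)

        def choose_action(self, state):
            # Epsilon-greedy selection over the phase-timing adjustments.
            if random.random() < self.epsilon:
                return random.choice(self.ACTIONS)
            return max(self.ACTIONS, key=lambda a: self.q[(state, a)])

        def update(self, state, action, reward, next_state):
            # Q-learning update whose bootstrap value blends the local estimate
            # with the values advertised by neighbouring junctions (a much
            # simplified stand-in for CRL's advertisement mechanism).
            own = self.value(next_state)
            if self.neighbours:
                shared = sum(n.advertised_value for n in self.neighbours) / len(self.neighbours)
                bootstrap = (1 - self.neighbour_weight) * own + self.neighbour_weight * shared
            else:
                bootstrap = own
            key = (state, action)
            self.q[key] += self.alpha * (reward + self.gamma * bootstrap - self.q[key])
            self.advertised_value = self.value(next_state)

In a full system each agent would be driven by a traffic simulator or live detector data, with the reward computed from measured waiting times or queue lengths at its junction; the paper's evaluation uses a large-scale simulation of Dublin's inner city centre.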
Description: PUBLISHED
Sydney, NSW
URI: http://hdl.handle.net/2262/32669
Appears in Collections:Computer Science (Scholarly Publications)

Files in This Item:

File: a collaborative.pdf
Description: published (publisher copy), peer-reviewed
Size: 529.11 kB
Format: Adobe PDF


This item is protected by original copyright




 
