Reinforcement Learning Based Traffic Signal Control: A Performance Comparison Under Different Traffic Scenarios

Abstract
Traffic congestion, which is increasing in megacities in parallel with population growth and urbanization, has become one of the most important problems in modern urban transportation. In this context, the effectiveness of control strategies at signalized intersections is all the more important. In this study, a comparative evaluation of different traffic signal control strategies was conducted in a single-intersection scenario using the SUMO (Simulation of Urban MObility) platform. The main objective of the study is to demonstrate the effectiveness of the reinforcement learning (RL) approach in adaptive traffic signal control, especially in comparison with traditional and rule-based methods. To this end, four different control methods were implemented and tested: fixed-time control, vehicle-actuated control as provided by the SUMO platform, a fuzzy logic controller, and an RL-based controller. The RL model was trained for different numbers of episodes (50, 150, 300, and 500) to study how performance changes over the course of training. The simulations were conducted under two demand scenarios: low traffic density (3196 vehicles) and high traffic density (6748 vehicles). Average waiting time and average travel time were used to evaluate system performance. The results show that the RL-based method performs poorly when traffic volume is low but clearly outperforms the other control methods when traffic volume is high.
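The paper does not specify the RL formulation used; as a minimal illustrative sketch (not the authors' implementation), the idea of RL-based signal control can be shown with tabular Q-learning on a toy intersection. The state (coarse queue-length bins), action set (which approach gets green), arrival/service rates, and reward (negative total queue, a rough proxy for the waiting-time metric used in the paper) are all assumptions made for this example:

```python
import random
from collections import defaultdict

def bin_q(q):
    # Discretize a queue length into coarse bins: 0, 1-3, 4-7, 8+ (assumed bins)
    return 0 if q == 0 else 1 if q <= 3 else 2 if q <= 7 else 3

def step(ns, ew, action):
    """Toy deterministic intersection: action 0 gives the NS approach green,
    action 1 gives EW green. A green clears up to 4 vehicles per step;
    1 vehicle arrives on NS and 2 on EW each step (assumed rates).
    Reward is the negative total queue, a crude proxy for waiting time."""
    if action == 0:
        ns = max(0, ns - 4)
    else:
        ew = max(0, ew - 4)
    ns, ew = ns + 1, ew + 2
    return ns, ew, -(ns + ew)

def train(episodes=300, horizon=60, alpha=0.2, gamma=0.9, eps=0.2, seed=1):
    """Tabular Q-learning with epsilon-greedy exploration."""
    rng = random.Random(seed)
    Q = defaultdict(float)  # keys: ((ns_bin, ew_bin), action)
    for _ in range(episodes):
        ns, ew = rng.randint(0, 8), rng.randint(0, 8)
        for _ in range(horizon):
            s = (bin_q(ns), bin_q(ew))
            if rng.random() < eps:
                a = rng.randrange(2)
            else:
                a = max((0, 1), key=lambda x: Q[(s, x)])
            ns, ew, r = step(ns, ew, a)
            s2 = (bin_q(ns), bin_q(ew))
            Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, 0)], Q[(s2, 1)]) - Q[(s, a)])
    return Q

def rollout(policy, steps=60):
    """Run a policy from a fixed start; return cumulative queue (lower is better)."""
    ns, ew, total = 4, 4, 0
    for _ in range(steps):
        ns, ew, r = step(ns, ew, policy(ns, ew))
        total += -r
    return total
```

Comparing the greedy learned policy against a naive fixed policy (always NS green) mirrors, in miniature, the paper's fixed-time vs. RL comparison: the learned controller adapts green time to the heavier EW demand and accumulates a much smaller total queue.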

Reference:
S. İlgen and A. Durdu, "Reinforcement Learning Based Traffic Signal Control: A Performance Comparison Under Different Traffic Scenarios," 2025 14th International Symposium on Advanced Topics in Electrical Engineering (ATEE), Bucharest, Romania, 2025, pp. 1-6.

Link: https://doi.org/10.1109/ATEE66006.2025.11299972