Cognitive Engine Testbed for Vehicular Communications

Ad-hoc networks have the potential to increase the safety and reliability of autonomous vehicles. The radio spectrum available for such networks is limited, however. Cognitive radio, especially when integrated with reinforcement learning, may ease this spectrum scarcity by learning optimal transmission policies and detecting the presence of other users, particularly when a primary user and a secondary user contend for the same spectrum. This paper presents a testbed for simulating cognitive engines in these networks using several reinforcement learning algorithms, including ε-greedy, softmax, and Q-learning. The goal of these cognitive engines is to learn the best modulation scheme and coding rate for the observed channel conditions and a user-defined optimization goal (e.g., maximize throughput or minimize bit error rate), and the testbed visualizes the engine's performance at each time step. In the Q-learning primary-user versus secondary-user scenario, the cognitive engine also learns to choose the best channel for transmission.
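The channel-selection behavior described above can be sketched as a single-state Q-learning loop with ε-greedy exploration. The channel indices, reward model, and hyperparameters below are illustrative assumptions for the sketch, not values from the testbed itself:

```python
import random

def train_channel_agent(rewards, episodes=2000, alpha=0.1,
                        gamma=0.9, epsilon=0.1, seed=0):
    """Hypothetical sketch: learn Q-values for channel selection.

    rewards: dict mapping channel index -> mean reward (e.g. normalized
    throughput), perturbed with noise to mimic a fading channel.
    """
    rng = random.Random(seed)
    q = {ch: 0.0 for ch in rewards}  # Q-table: one state, one action per channel
    for _ in range(episodes):
        # epsilon-greedy selection: explore with probability epsilon,
        # otherwise transmit on the channel with the highest Q-value
        if rng.random() < epsilon:
            ch = rng.choice(list(rewards))
        else:
            ch = max(q, key=q.get)
        r = rewards[ch] + rng.gauss(0, 0.05)  # noisy observed reward
        # Q-update; with a single state, the next-state term is this
        # state's own maximum Q-value
        q[ch] += alpha * (r + gamma * max(q.values()) - q[ch])
    return q

# Assumed toy environment: channel 2 is free of the primary user and
# yields the highest mean reward, so the agent should prefer it.
q = train_channel_agent({0: 0.2, 1: 0.5, 2: 0.9})
best = max(q, key=q.get)
```

A softmax variant would replace the ε-greedy branch with sampling channels in proportion to `exp(q[ch] / temperature)`, trading the hard explore/exploit split for a graded one.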

CAT Vehicle 2019

Brandon Dominique (New Jersey Institute of Technology)

Daniel Fishbein (Missouri State University)
Kreienkamp (University of Notre Dame)

Alex Day (Clarion University of Pennsylvania)
Sam Hum (Colorado College)
Riley Wagner (University of Arizona)

Eric Av (Gonzaga University)
Hoang Huynh (Georgia State University-Perimeter College)
John Nguyen (University of Minnesota, Twin Cities)

Brandon Dominique's experience: a brief overview of the work I did at the University of Arizona on their student-led self-driving car project, the CAT Vehicle.