Cognitive Engine Testbed for Vehicular Communications
Ad-hoc networks have the potential to increase the safety and reliability of autonomous vehicles; however, the radio spectrum available to such networks is limited. Cognitive radio, especially when integrated with reinforcement learning, may ease this spectrum scarcity by learning optimal transmission policies and detecting the presence of other users, particularly when a primary user and a secondary user compete for the same spectrum. This paper presents a testbed for simulating cognitive engines in these networks using a variety of reinforcement learning algorithms, including ε-greedy, softmax, and Q-learning. The goal of each cognitive engine is to learn the best modulation scheme and coding rate for the observed channel conditions, given a user-defined optimization goal (e.g., maximize throughput or minimize bit error rate), and the testbed displays a visualization of the cognitive engine's performance at each time step. In the case of Q-learning in a primary-user versus secondary-user scenario, the cognitive engine also learns to choose the best channel for transmission.
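The action-selection loop described above can be sketched with a minimal ε-greedy bandit over (modulation, coding-rate) pairs. This is an illustrative sketch only: the action set, reward values, and function names below are assumptions for demonstration, not taken from the testbed, and the reward is a stand-in for a measured throughput signal.

```python
import random

# Hypothetical action set of (modulation, coding rate) pairs; the
# testbed's actual options may differ.
ACTIONS = [("BPSK", 0.5), ("QPSK", 0.5), ("QPSK", 0.75), ("16QAM", 0.75)]

def select_action(q_values, epsilon):
    """With probability epsilon explore a random action; otherwise exploit
    the action with the highest estimated value."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))
    return max(range(len(q_values)), key=q_values.__getitem__)

def update(q_values, counts, action, reward):
    """Incremental sample-average update of the action-value estimate."""
    counts[action] += 1
    q_values[action] += (reward - q_values[action]) / counts[action]

random.seed(0)
q = [0.0] * len(ACTIONS)
n = [0] * len(ACTIONS)
for t in range(1000):
    a = select_action(q, epsilon=0.1)
    # Stand-in reward: a fixed mean throughput per action plus noise,
    # simulating channel feedback (action 2 is best in this toy setup).
    reward = [0.2, 0.5, 0.8, 0.4][a] + random.gauss(0, 0.05)
    update(q, n, a, reward)

best = ACTIONS[q.index(max(q))]
print(best)
```

After enough time steps, the engine's value estimates concentrate on the pair with the highest average reward, mirroring how the testbed's cognitive engine converges on a modulation and coding choice for a given environment.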