CAT Vehicle 2019

"Although I wish the conference could have been in person so I could have met other researchers in Denver, I find myself very lucky to have worked with my team, and to have been able to participate in the CAT Vehicle REU program," said Christopher Kreienkamp, whose presentation was delivered as a YouTube video due to the conference's COVID-19 policies. Kreienkamp and his partner Daniel Fishbein were part of CAT Vehicle 2019.

Brandon Dominique (New Jersey Institute of Technology)

Daniel Fishbein (Missouri State University)
Christopher Kreienkamp (University of Notre Dame)

Alex Day (Clarion University of Pennsylvania)
Sam Hum (Colorado College)
Riley Wagner (University of Arizona)

Eric Av (Gonzaga University)
Hoang Huynh (Georgia State University-Perimeter College)
John Nguyen (University of Minnesota, Twin Cities)

Brandon Dominique's experience: A brief overview of the work that I did at the University of Arizona for their student-led self-driving car project, the CAT Vehicle.

In the summer of 2019, I worked on the CAT Vehicle REU at the University of Arizona. My group created a specialized language to be used at local Tucson elementary schools to program Lego EV3 robots and the CAT Vehicle (an autonomous vehicle). I want to thank the University of Arizona, the NSF, and the other members of the CAT Vehicle and HF projects.

Alex Day's experience: This video outlines the project that I was a part of during the University of Arizona's CAT Vehicle REU.

Riley's video presents a project on a domain-specific modeling language (DSML) designed in WebGME, a server-based generic modeling environment. The language mirrors the curriculum of non-expert programmers and incorporates sensor data, and it is to be deployed on both the Cognitive and Autonomous Test Vehicle (CATVehicle) and Lego EV3 robots. However, maintaining safety within these DSML-designed cyber-physical systems (CPS) can be an issue.

Autonomous driving has captured academic and public imaginations for years. This project implicitly teaches a car to follow an optimized route to a destination while avoiding obstacles. The car learns the route through reinforcement learning, via a reward/penalty system. Using only the distance to the nearest object and that object's angle, the car avoids collisions and learns the optimized route in computer-simulated worlds.
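The reward/penalty scheme described above can be sketched with tabular Q-learning. This is an illustrative reconstruction, not the team's actual code: the state is a (distance bin, angle bin) pair for the nearest obstacle, matching the two observations named in the text, while the action set, reward values, and hyperparameters are assumptions chosen for clarity.

```python
import random

# Illustrative tabular Q-learning for obstacle avoidance (not the REU code).
# State: (distance bin, angle bin) to the nearest obstacle -- the only two
# observations the project description mentions. Actions and rewards are
# hypothetical placeholders.
ACTIONS = ["left", "straight", "right"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount, exploration

def update_q(q, state, action, reward, next_state):
    """One Q-learning step: Q(s,a) += alpha*(r + gamma*max_a' Q(s',a') - Q(s,a))."""
    best_next = max(q.get((next_state, a), 0.0) for a in ACTIONS)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)

def choose_action(q, state, rng):
    """Epsilon-greedy: usually exploit the best known action, sometimes explore."""
    if rng.random() < EPSILON:
        return rng.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q.get((state, a), 0.0))

# Toy training loop: penalize collisions, reward staying clear of obstacles.
rng = random.Random(0)
q = {}
state = (3, 0)  # (distance bin 0-3, angle bin -2..2)
for _ in range(200):
    action = choose_action(q, state, rng)
    collided = state[0] == 0 and action == "straight"  # toy collision rule
    reward = -10.0 if collided else 1.0
    next_state = (rng.randrange(4), rng.randrange(-2, 3))  # random toy dynamics
    update_q(q, state, action, reward, next_state)
    state = next_state
```

In a real simulated world the next state would come from the car's physics rather than a random draw, but the update rule is the same: actions that lead toward collisions accumulate negative Q-values and are steered away from over time.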
