We introduce an innovative way to categorize and combine vehicle monitoring system data using engineering design concepts to improve the accuracy of driver behaviour prediction models. Autonomous driving is one of the most promising industries in travel and transportation. Environmental perception modules play a crucial role in autonomous driving systems and driving behaviour prediction. These modules use various types of sensors to construct a real-time perception of the road environment. Popular sensors include ultrasonic radar systems, LiDAR, cameras, and the vehicle's internal SCADA. As a result, the sensory systems are directly responsible for the quality of the collected data, which in turn affects the performance of a driving behaviour prediction system. Our approach enables us to assess the significance of each group of data features for the driver behaviour prediction model's success, with a stronger emphasis on engineering considerations rather than being constrained by the limitations or capabilities of specific sensors.
We categorized the 123 features collected in our studies into 7 categories.
Based on the developed data feature categories, we combined different subsets of these categories to create new datasets with reduced dimensionality.
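The category-to-dataset reduction described above can be sketched as follows. The category names and column groupings here are illustrative assumptions; the actual categories and their assignment over the 123 features come from the study's data.

```python
import numpy as np

# Hypothetical category names and column indices (toy stand-ins for the
# study's real feature categories over 123 features).
categories = {
    "vehicle_dynamics": [0, 1, 2],
    "driver_input":     [3, 4],
    "road_context":     [5, 6, 7],
}

def combine_categories(X, categories, selected):
    """Build a reduced-dimension dataset by keeping only the columns
    that belong to the selected feature categories."""
    cols = [c for name in selected for c in categories[name]]
    return X[:, cols]

# Toy data: 10 samples, 8 features standing in for the full 123.
X = np.random.rand(10, 8)
X_reduced = combine_categories(X, categories, ["vehicle_dynamics", "driver_input"])
print(X_reduced.shape)  # (10, 5)
```

Each choice of category subset yields one candidate dataset, so the model can be benchmarked per combination to measure each category group's contribution.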
A three-layer LSTM with autoencoder and MLP units is used as the core architecture of the benchmarking model. The system architecture is shown below:
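A minimal PyTorch sketch of the described core is given below: a three-layer LSTM whose last hidden state passes through an autoencoder-style bottleneck and an MLP prediction head. The layer widths, sequence length, and two-value output (velocity, steering angle) are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class LSTMAutoencoderMLP(nn.Module):
    """Sketch: three-layer LSTM -> autoencoder bottleneck -> MLP head."""
    def __init__(self, n_features, hidden=64, bottleneck=16):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=3, batch_first=True)
        self.encoder = nn.Linear(hidden, bottleneck)
        self.decoder = nn.Linear(bottleneck, hidden)   # reconstruction branch
        self.mlp = nn.Sequential(
            nn.Linear(bottleneck, 32), nn.ReLU(),
            nn.Linear(32, 2),  # assumed targets: velocity and steering angle
        )

    def forward(self, x):
        out, _ = self.lstm(x)          # (batch, seq, hidden)
        z = self.encoder(out[:, -1])   # last time step -> bottleneck code
        return self.mlp(z), self.decoder(z)

model = LSTMAutoencoderMLP(n_features=8)
pred, recon = model(torch.randn(4, 20, 8))  # batch of 4, sequences of 20 steps
print(pred.shape, recon.shape)  # torch.Size([4, 2]) torch.Size([4, 64])
```

The reconstruction branch lets an autoencoder loss regularize the bottleneck while the MLP head is trained on the prediction targets.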
All training was conducted on a GeForce RTX 2070 (8 GB) GPU, with a total training time of approximately 27 hours. The project highlights that more data does not necessarily improve model predictions when participants exhibit diverse driving behaviours. Our results indicate that, using our designed Feature Selection Model, better outcomes can be achieved with less participant data. Overall, our feature model improves driver behaviour prediction accuracy by 5 km/h in velocity and 8.1 degrees in steering angle while using 15% less data.
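One way the reported gains could be measured is as the reduction in mean absolute error on velocity (km/h) and steering angle (degrees) between a baseline model and the feature-selection model. The sketch below uses toy numbers, not the study's data, and the metric choice itself is an assumption.

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean absolute error between ground truth and predictions."""
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))

# Toy velocity targets and predictions (km/h); illustrative values only.
v_true     = [52.0, 48.0, 60.0]
v_baseline = [60.0, 41.0, 68.0]   # baseline model predictions
v_selected = [54.0, 46.5, 62.0]   # feature-selection model predictions

# Improvement = how much the feature-selection model lowers the error.
improvement = mae(v_true, v_baseline) - mae(v_true, v_selected)
print(round(improvement, 2))  # 5.83
```

The same computation applied to steering-angle predictions would yield the per-target improvement in degrees.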