Design and Realization of an Experimental Autonomous Driving System Based on a Slot Car Set

This paper presents the design and realization of an experimental autonomous driving system based on a slot car set. The system employs a wireless camera for track information acquisition, which is transmitted to a control computer. A backpropagation (BP) neural network controller is developed to emulate human decision-making, enabling the slot car to autonomously navigate the track by adjusting the DC motor’s voltage according to real-time track conditions. The hardware platform comprises six subsystems: the slot car set, track information acquisition, track information transmission, data processing, motor control, and speed measurement. The software platform includes modules for image processing, feature extraction, neural network training, and control signal generation. Experimental results demonstrate the system's effectiveness: the autonomous slot car won the large majority of head-to-head races against manual operation.


Autonomous driving has become a pivotal area of research within the context of Intelligent Transportation Systems (ITS), aiming to enhance safety, efficiency, and convenience by reducing human intervention in vehicle operation. This paper details the design and development of an experimental autonomous driving system utilizing a slot car setup, serving as a scaled and controlled environment to simulate and test key principles of autonomous navigation.

The core concept revolves around equipping a slot car with sensory and control modules that enable its operation under computer control. The system’s primary sensing method involves a wireless camera mounted on the slot car, which captures real-time images of the track ahead. These images are transmitted wirelessly to a central control computer, which processes the data to extract critical features indicative of the track’s curvature, straight segments, or deviations. The processing of this visual data involves converting greyscale images to binary images, detecting connected components, identifying marks, locating shape centers, and calculating regression line angles to accurately characterize the track’s geometry.
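The pipeline above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the threshold value, the use of `scipy.ndimage` for connected components, and the least-squares line fit are all assumptions standing in for the paper's unspecified algorithms.

```python
# Sketch of the track-feature extraction pipeline: binarize the frame,
# detect connected components (the track marks), locate shape centers,
# and compute the regression line angle characterizing the track geometry.
# The threshold and library choices are illustrative assumptions.
import numpy as np
from scipy import ndimage

def extract_track_features(grey: np.ndarray, threshold: int = 128):
    # 1. Greyscale -> binary image.
    binary = grey > threshold

    # 2. Detect connected components (one component per track mark).
    labels, n_marks = ndimage.label(binary)

    # 3. Locate the shape center (centroid) of each mark.
    centroids = ndimage.center_of_mass(binary, labels, range(1, n_marks + 1))
    rows, cols = zip(*centroids)

    # 4. Fit a regression line through the centroids; its angle indicates
    #    whether the upcoming segment is straight or curved.
    slope, _ = np.polyfit(cols, rows, 1)
    angle_deg = float(np.degrees(np.arctan(slope)))
    return n_marks, centroids, angle_deg
```

With at least two marks in view, the returned count, centroids, and angle form the core of the feature vector fed to the controller.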

Once the track features are extracted, they serve as input parameters for a neural network controller. The choice of a backpropagation neural network stems from its capability to approximate complex nonlinear relationships, making it well-suited for interpreting visual signals and controlling the vehicle’s motor in real-time. The neural network’s input layer comprises nine neurons, accounting for the number of identified track features and the current speed, while a single output neuron provides the voltage instruction for the slot car’s DC motor. The hidden layer contains five neurons, optimized to balance complexity and computational efficiency. The network is trained on data collected under manual control, mimicking a human driver’s decisions, to establish a mapping from visual features to control actions.

During training, data comprising features such as the number of marks, shape center coordinates, regression line angles, and measured speeds are normalized and stored. An algorithm selects the most representative data to refine the training set, ensuring the neural network captures the essential control strategies. Validation involves comparing network outputs against known control signals, confirming the network’s capacity to generalize to new track configurations.
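The preprocessing steps can be sketched as follows. Min-max scaling is a common choice for sigmoid networks, and the greedy distance filter is only one possible reading of "selecting the most representative data"; the paper does not specify either.

```python
# Sketch of training-data preparation: min-max normalization of each
# feature column, plus a greedy filter that keeps only samples sufficiently
# different from those already kept. Both are illustrative assumptions.
import numpy as np

def normalize(data: np.ndarray):
    """Scale each feature column into [0, 1]."""
    lo = data.min(axis=0)
    hi = data.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)  # avoid division by zero
    return (data - lo) / span, lo, span

def denormalize(scaled, lo, span):
    return scaled * span + lo

def select_representative(samples, tol=0.05):
    """Keep a sample only if it lies farther than `tol` (Euclidean) from
    every sample already kept; discards near-duplicates."""
    kept = []
    for s in samples:
        if all(np.linalg.norm(s - k) > tol for k in kept):
            kept.append(s)
    return np.array(kept)
```

Normalization keeps all inputs in the sigmoid's sensitive range, while the filter prevents long straight-track runs from dominating the training set.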

The hardware platform supporting the system includes a track of 7.4 meters with guiding grooves, guiding the slot car’s movement and providing a physical interface for the system. The car is powered and controlled via a DC motor, with its speed regulated by adjusting the applied voltage, controlled by signals derived from the neural network and processed through data acquisition hardware. The system features a speed sensor on the slot car, transmitting speed data wirelessly back to the control computer, which integrates this information into control decisions.

Software development focuses on modules to acquire, process, and interpret visual data, train the neural network, and generate control commands. Image processing algorithms convert raw images into feature vectors, which are then fed into the neural network. The network, once trained, predicts the appropriate control signal in response to real-time visual inputs. The control logic modulates the motor voltage, thereby adjusting the slot car’s speed and ensuring accurate tracking along the curve or straight segments.
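The closed loop described above can be outlined as a simple fixed-period cycle. The `camera`, `motor`, and `speed_sensor` interfaces, the supply voltage, and the control period are hypothetical stand-ins for the paper's hardware and software modules.

```python
# Sketch of the real-time control loop: acquire a frame, extract features,
# append the current speed, query the trained network, and set the motor
# voltage. All device interfaces and constants are hypothetical.
import time

V_MAX = 12.0  # assumed motor supply voltage (not given in the paper)

def control_loop(camera, extract_features, network, motor, speed_sensor,
                 period_s=0.05, steps=None):
    step = 0
    while steps is None or step < steps:
        frame = camera.read()                        # acquire track image
        features = extract_features(frame)           # image -> feature vector
        x = list(features) + [speed_sensor.read()]   # append current speed
        voltage = float(network.forward(x)) * V_MAX  # normalized -> volts
        motor.set_voltage(voltage)                   # adjust slot-car speed
        time.sleep(period_s)                         # fixed control period
        step += 1
```

A fixed control period keeps the feature-to-voltage latency bounded, which matters when the car approaches a curve at speed.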

Experimental tests compare automated control with manual operation, with the autonomous slot car achieving a higher win rate (18 of 20 races) and maintaining an average speed of approximately 2.5 meters per second. These results validate the effectiveness of the neural network controller and the overall system architecture in achieving autonomous navigation on a predefined track. The successful implementation demonstrates the feasibility of using neural networks for autonomous driving applications, particularly within scaled or controlled environments conducive to iterative testing and development.

In conclusion, the developed autonomous driving system integrating wireless sensing, computer vision, neural network control, and automated actuation offers an effective platform for research and demonstration of autonomous navigation principles. While the system operates within a simplified environment, its design principles can be extended to more complex scenarios, including real vehicles. Future work may involve incorporating additional sensors, advanced machine learning models, and more sophisticated control strategies to improve the robustness and adaptability of autonomous driving systems across diverse conditions.
