AI based Vehicle Vision Perception System trains AI for suppliers
2021/08/12 | By CENS

Training a self-driving vehicle's AI platform requires datasets; however, not all companies have the means to create such datasets themselves. This is where the "AI based Vehicle Vision Perception System," developed by the Institute for Information Industry (III), comes in.
Hsu Ting, a senior planner at the III Smart System Institute's Emerging Markets and Planning Service Centre, told CENS in an interview at the Taipei AMPA show that the datasets are primarily a fusion of camera images and LiDAR point clouds. Regarded as Taiwan's first autonomous-driving dataset, the "Formosa Database" was built by the III team, which first labeled the images and point clouds manually and then moved to a semi-automatic labeling system on the platform. The effort spares clients from having to pour resources and time into training their own AI systems from scratch.
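III has not published implementation details of the platform, but a semi-automatic labeling pass over fused camera/LiDAR samples often looks something like the sketch below. The file layout, class names, and the stub pre-labeler are assumptions for illustration only.

```python
# Minimal sketch of a semi-automatic labeling pass over fused camera/LiDAR samples.
# The data layout, class list, and the stub pre-labeler are illustrative assumptions,
# not details published by III.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Box3D:
    label: str              # e.g. "scooter", "pedestrian", "car"
    center: tuple           # (x, y, z) in the LiDAR frame, metres
    size: tuple             # (length, width, height), metres
    confidence: float
    verified: bool = False  # set True once a human annotator reviews the box

@dataclass
class Sample:
    image_path: str         # high-resolution camera frame
    pointcloud_path: str    # matching LiDAR sweep
    boxes: List[Box3D] = field(default_factory=list)

def pre_label(sample: Sample) -> None:
    """Stand-in for a detector trained on the manually labeled seed set.
    A real pipeline would run inference on the image and point cloud here."""
    sample.boxes.append(Box3D("scooter", (12.0, -1.5, 0.4), (1.8, 0.7, 1.2), 0.62))

def review_queue(samples, threshold=0.8):
    """Accept high-confidence proposals automatically; queue the rest for a human."""
    needs_review = []
    for s in samples:
        pre_label(s)
        for box in s.boxes:
            if box.confidence >= threshold:
                box.verified = True
            else:
                needs_review.append((s, box))
    return needs_review

samples = [Sample("frames/000123.png", "sweeps/000123.pcd")]
print(len(review_queue(samples)), "boxes queued for manual review")
```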
To train an AI properly, such datasets should be tailored to the local region, Hsu said. III built the semi-automatic labeling system so that companies can change and adjust labels to match local culture or environmental factors. For instance, the Formosa Database reflects far more scooters on Taiwanese roads than would be seen in some European countries, and it contains little data for snowy road conditions, since Taiwan's climate is largely subtropical. After more than two years of data collection, Hsu said, the AI can identify diverse road objects in complex environments for the development of forward collision warning (FCW), pedestrian collision warning (PCW), blind-spot detection (BSD), and lane departure warning (LDW) functions.
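As a rough illustration of how a customer might adapt the label set to its own region, the sketch below remaps and filters classes before retraining. The class names and mapping are assumptions, not III's published taxonomy.

```python
# Illustrative sketch of localizing a label set; the classes and mapping
# are assumptions, not III's published taxonomy.
TAIWAN_CLASSES = {"car", "bus", "truck", "scooter", "pedestrian", "bicycle"}

# A customer in a market with few scooters might fold them into a broader
# two-wheeler class, while keeping the other classes unchanged.
LOCALIZATION_MAP = {
    "scooter": "motorcycle",
    "pedestrian": "pedestrian",
    "car": "car",
    "bus": "bus",
    "truck": "truck",
}

def relabel(boxes, mapping, keep_unmapped=False):
    """Rename or drop labels according to a region-specific mapping."""
    out = []
    for label, geometry in boxes:
        if label in mapping:
            out.append((mapping[label], geometry))
        elif keep_unmapped:
            out.append((label, geometry))
    return out

print(relabel([("scooter", (12.0, -1.5)), ("car", (30.0, 0.2))], LOCALIZATION_MAP))
```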
News of pedestrians or scooter riders being caught under buses or large trucks is unfortunately common in Taiwan, which Hsu attributes to the island's high population and traffic density. The III-designed system works in conjunction with four high-resolution cameras mounted at four points on the vehicle, positioned to help the driver avoid scooters or pedestrians hidden in blind spots when turning or driving. The possibilities are numerous: Hsu says III has also worked with a local supplier on an automatic braking system that intervenes if the driver stays on course toward a scooter rider or pedestrian in a blind spot.
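A simplified version of such a blind-spot check might look like the sketch below. The camera names, zone geometry, and braking trigger are illustrative assumptions, not the design of III or its partner.

```python
# Minimal sketch of a blind-spot check over four camera feeds driving an
# automatic-brake trigger. Camera names, zone geometry, and the braking
# interface are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Detection:
    camera: str        # "front", "rear", "left", "right"
    label: str         # "scooter", "pedestrian", ...
    distance_m: float  # longitudinal distance from the vehicle, metres
    lateral_m: float   # lateral offset from the vehicle side, metres

BLIND_SPOT_ZONES = {
    # camera -> (max longitudinal distance, max lateral offset) counted as a blind spot
    "left":  (3.0, 1.5),
    "right": (3.0, 1.5),
    "rear":  (2.0, 1.0),
}

VULNERABLE = {"scooter", "pedestrian", "bicycle"}

def in_blind_spot(det: Detection) -> bool:
    zone = BLIND_SPOT_ZONES.get(det.camera)
    if zone is None:
        return False
    max_dist, max_lat = zone
    return det.distance_m <= max_dist and abs(det.lateral_m) <= max_lat

def should_brake(detections, turning: bool) -> bool:
    """Brake automatically only when the vehicle is turning toward a
    vulnerable road user inside a blind-spot zone."""
    return turning and any(
        det.label in VULNERABLE and in_blind_spot(det) for det in detections
    )

dets = [Detection("right", "scooter", distance_m=2.1, lateral_m=0.8)]
print(should_brake(dets, turning=True))  # True: scooter in the right blind spot
```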
However, moving data from high-resolution images with minimal latency, a delay that can be fatal for vehicle-oriented AI if the system cannot keep up, requires hardware that can sustain high-speed networking and data transfers. Hsu said III worked with local suppliers to adapt the Nvidia chip it was using to automotive-grade specifications. The automotive-spec hardware, designed with ADLINK, is featured in its IPC model and was launched in June.
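To give a sense of why latency matters here, the sketch below checks each frame against a fixed time budget. The 30 fps camera rate and the processing stub are assumptions, not published specifications of the hardware.

```python
# Illustrative per-frame latency budget check; the 30 fps rate and the
# processing stub are assumptions, not specifications of the ADLINK hardware.
import time

FRAME_PERIOD_S = 1.0 / 30.0  # a 30 fps camera leaves roughly 33 ms per frame

def process_frame(frame_id: int) -> None:
    """Stand-in for capture, transfer, and inference on one high-resolution frame."""
    time.sleep(0.005)  # pretend the pipeline takes 5 ms

def run(num_frames: int = 10) -> None:
    for frame_id in range(num_frames):
        start = time.perf_counter()
        process_frame(frame_id)
        elapsed = time.perf_counter() - start
        if elapsed > FRAME_PERIOD_S:
            # In a real vehicle this would raise a safety event, not print.
            print(f"frame {frame_id}: over budget by {(elapsed - FRAME_PERIOD_S) * 1e3:.1f} ms")

run()
```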