
Data-Driven Development Process


The AutoX Team @ Feb 26, 2020


At the beginning of our journey, we were hard at work building the fundamental blocks and tools for our AI Driver: the mapping and perception pipeline, the planning and control stack, the infrastructure to record, replay, and simulate data, and the dedicated compute platform. With these basics in place, we have adopted a new guiding principle for our engineering development: the Data-Driven Development Process, which is also highlighted in our 2019 California DMV report.


Our data-driven development process leverages a large amount of real-world road test data and virtual-world simulation data to define engineering goals and evaluate our progress. Over the past year, we have practiced this methodology thoroughly, and throughout this development period our approach has been systematic and data-driven.


Real-world Testing: Breadth and Depth


Our testing and validation of the AutoX Driver have been carried out in two dimensions: Breadth and Depth.


The Breadth of testing aims to gain experience and data in a variety of environments. In doing so, we have scaled to 12 cities globally, each with significantly different road conditions, weather, and driving styles. The way to keep learning and improving, so that our system can be deployed robustly, is to give our AI Driver as many varied experiences as possible and to uncover the widest possible range of driving situations.



The Depth of testing aims to rigorously stress test the robustness and reliability of the AutoX AI Driver within the first Operational Design Domain (ODD) in which we aim to go driverless. The knowledge gained from the global Breadth testing is shared across the network of our AI Drivers, significantly improving driving performance within our driverless ODD.


Towards Truly Driverless


For the past 12 months, the majority of testing carried out by AutoX autonomous vehicles at our California office has focused on validating fully driverless performance. Towards this goal, we have built full redundancy into the vehicles. In 2019, we mostly tested our Generation 3 configuration, which is our most extensively tested version internally.


In the past year, we also completed our Generation 4 configuration platform, which was unveiled at CES 2020. Generation 4 spearheads the adoption of automotive-grade sensors and solid-state LiDAR, expanding how our autonomous vehicles perceive the world.


Virtual-world Testing


Virtual-world testing, or testing in simulation, is key to our data-driven development process. We have a dedicated R&D team, based in our Beijing R&D center, building our simulation platform: xSim. The xSim platform can recreate, modify, and amplify our real-world experience, or generate entirely new situations from scratch. We pursue high-quality test data to power rapid iterations and reveal the subtle consequences of any change to the system. xSim focuses on the truly interesting miles and on an insightful analytics platform that interprets simulation results at high speed.
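To make the idea of "amplifying" real-world experience concrete, here is a minimal Python sketch of how one logged scenario could be turned into many perturbed variants for simulation. The Scenario fields, the amplify function, and the perturbation ranges are illustrative assumptions, not the actual xSim interface.

```python
# Sketch of scenario amplification: take one logged scenario and generate
# many perturbed variants to simulate. All names here are hypothetical.
import random
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Scenario:
    cut_in_distance_m: float   # gap at which another car cuts in
    cut_in_speed_mps: float    # speed of the cutting-in car
    ego_speed_mps: float       # our vehicle's speed when the cut-in starts

def amplify(base: Scenario, n: int, seed: int = 0) -> list[Scenario]:
    """Produce n variants of a logged scenario by jittering its parameters."""
    rng = random.Random(seed)
    return [
        replace(
            base,
            cut_in_distance_m=base.cut_in_distance_m * rng.uniform(0.5, 1.5),
            cut_in_speed_mps=base.cut_in_speed_mps + rng.uniform(-2.0, 2.0),
        )
        for _ in range(n)
    ]

# One real-world log becomes hundreds of "interesting miles" in simulation.
variants = amplify(Scenario(30.0, 12.0, 15.0), n=200)
```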


We have laid the foundation of a deterministic simulation system to deal with all levels of uncertainty in the system, enabling deterministic offline testing of any module, for any team, across our entire organization. Our simulation uses the same programming framework as the AutoX Driver software stack, the same libraries, and similar compute hardware to support seamless integration.
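As a rough illustration of what determinism means here, the sketch below steps a module at a fixed rate with a seeded random source, so the same recorded inputs always produce the same outputs. The PlannerModule interface and the toy logic inside it are assumptions for illustration, not the AutoX framework.

```python
# Sketch of deterministic offline execution: fixed timestep, seeded RNG, and
# inputs consumed in a fixed order, so every replay is bit-identical.
import random

class PlannerModule:
    def __init__(self, seed: int):
        self.rng = random.Random(seed)   # no global state, wall clock, or OS entropy

    def step(self, obstacle_distances_m: list[float]) -> float:
        """Return a target speed given the current obstacles (toy logic)."""
        nearest = min(obstacle_distances_m, default=100.0)
        jitter = self.rng.uniform(-0.1, 0.1)   # even the noise is reproducible
        return max(0.0, min(15.0, nearest / 3.0 + jitter))

def run_offline(log_frames: list[list[float]], seed: int = 42) -> list[float]:
    """Replay a recorded log through the module, one frame per fixed step."""
    module = PlannerModule(seed)
    return [module.step(frame) for frame in log_frames]

# Two replays of the same log yield exactly the same sequence of outputs.
log = [[40.0, 55.0], [35.0], [28.0, 90.0]]
assert run_offline(log) == run_offline(log)
```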


Only after a feature has been validated in simulation is it tested in a controlled, off-street environment that closely approximates the circumstances of the original disengagement. Only after this, and following a series of internal checks, does the feature make its way into the autonomous vehicles. All of our road test data is then recorded and uploaded to our xRay cloud servers to produce log-sim data for regression testing, and to blend log-sim and virtual-world sim seamlessly into even more challenging scenarios.
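A log-sim regression test of this kind could look roughly like the following sketch: replay a recorded drive through the module under test and compare against outputs approved on an earlier run. The file formats, the golden-output convention, and the tolerance are assumptions for illustration, not AutoX's actual tooling.

```python
# Sketch of a log-sim regression test: replay recorded inputs through the
# current build and compare against previously approved ("golden") outputs.
import json
from typing import Callable

def regression_test(
    replay: Callable[[list], list[float]],   # deterministic replay of the module under test
    log_path: str,
    golden_path: str,
    tol: float = 1e-6,
) -> bool:
    with open(log_path) as f:
        frames = json.load(f)        # recorded inputs, frame by frame
    with open(golden_path) as f:
        golden = json.load(f)        # outputs approved on an earlier, reviewed run
    outputs = replay(frames)
    return len(outputs) == len(golden) and all(
        abs(a - b) <= tol for a, b in zip(outputs, golden)
    )
```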


With a series of efficient development and validation tools, we have shortened our Continuous Integration cycle to less than two days. Within that time, our engineers can complete the entire loop: development, modular validation in simulation, system validation in simulation, hardware-in-the-loop (HIL) testing, vehicle-in-the-loop (VIL) testing, evaluation feedback, and code merge. This capability fuels our fast development speed and sustains continuous exponential growth.
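The gating in that loop can be pictured with a simple sketch: each validation stage must pass before the next runs, and only a fully green run is eligible for merge. The stage names mirror the list above; the runner functions are placeholders rather than real AutoX tooling.

```python
# Sketch of a gated validation loop; stage implementations are placeholders.
from typing import Callable

STAGES: list[tuple[str, Callable[[], bool]]] = [
    ("module simulation", lambda: True),   # modular validation in simulation
    ("system simulation", lambda: True),   # full-stack validation in simulation
    ("HIL test",          lambda: True),   # hardware in the loop
    ("VIL test",          lambda: True),   # vehicle in the loop
]

def validate_change() -> bool:
    for name, run in STAGES:
        if not run():
            print(f"blocked at: {name}")
            return False
    print("all stages green: ready to merge")
    return True
```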


Evaluating Our Progress


Recently, the California DMV released its 2019 report for autonomous vehicle testing. This report includes the disengagements caused by critical discrepancies of each autonomous driving system, from which miles per intervention (MPI) can be derived.
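For concreteness, MPI is simply autonomous miles driven divided by the number of interventions over the same period. The snippet below shows that calculation with made-up placeholder numbers, not AutoX's reported figures.

```python
# Miles per intervention (MPI): autonomous miles divided by interventions.
def miles_per_intervention(autonomous_miles: float, interventions: int) -> float:
    if interventions == 0:
        return float("inf")   # no interventions over the measured period
    return autonomous_miles / interventions

# Placeholder figures only: 10,000 miles with 5 interventions -> MPI of 2000.
print(miles_per_intervention(autonomous_miles=10_000.0, interventions=5))
```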


Our founder, Professor X, has many years of experience defining scientific metrics for evaluating AI systems. His research at MIT defined metrics for scene understanding that have been widely adopted by the computer vision community. As he has always said, “No metrics are perfect. Although MPI across companies is not an apples-to-apples comparison, it is one of the useful indicators of our own progress.”


Over the past three and a half years, we have achieved roughly a 15x improvement in MPI every year, thanks to the hard work of our engineering and operations teams. Besides MPI, we measure and track our progress with a full spectrum of AV-specific analyses and metrics covering both safety and comfort. We’ll share more about the metrics we’ve developed internally over the years in a future post.


We’re pleased with the rapid and meaningful advancements we made in 2019. We will continue to use testing as a key tool in the development of our autonomous driving system, on our way to achieving superhuman safety and democratizing autonomy to make transportation accessible to everyone.