Why AV development is a case of quality, not quantity
With autonomous vehicle technology comes added complexity in a number of areas, one of which is the test and evaluation phase. In the world of conventional vehicle development, many manufacturers have held the opinion that more test miles equate to greater safety.
The same approach has been taken by some with autonomous vehicles and, according to a study by the RAND Corporation, it would take 11 billion miles of test driving to demonstrate that an autonomous vehicle's fatal accident rate is 20% better than that of a human driver. The report also noted that covering that distance would take up to 500 years for a fleet of 100 cars driving 24 hours a day, 365 days a year, so alternatives clearly need to be sought.
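The fleet arithmetic behind that 500-year figure can be sanity-checked with a quick calculation; the one number not given above is the average test speed, assumed here to be roughly 25 mph:

```python
# Rough check of the RAND fleet arithmetic.
MILES_NEEDED = 11e9     # miles needed to demonstrate a 20% improvement
FLEET_SIZE = 100        # cars driving around the clock
AVG_SPEED_MPH = 25      # assumed average test speed (not stated in the article)

miles_per_year = FLEET_SIZE * 24 * 365 * AVG_SPEED_MPH
years_needed = MILES_NEEDED / miles_per_year
print(round(years_needed))  # roughly 500 years under these assumptions
```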
Enter Foretellix, which has developed a detailed set of software programs under the Foretify Technology banner. The key aim is to make the safety of autonomous vehicles measurable, working on the assumption that quality is better than quantity. The benefits of the approach are claimed to be industry-wide, with everyone from consumers and developers to insurance companies and regulators gaining the quantifiable confidence in safety that is needed for the deployment of autonomous vehicles.
“We originally used the approach in the semiconductor industry and are now applying the same base technique for autonomous vehicles,” explained Ziv Binyamini, CEO and Co-founder, Foretellix. “The mission is measurable safety and looking to apply the same kind of automation and metrics proven in another industry—moving from quantity of miles to quality of coverage.”
How Foretify works
The main objective of the Foretify software is to deal with the complexity involved in validating and verifying autonomous vehicles in the equally complex environments in which they need to operate.
“There are combinations of combinations of situations—what we call scenarios in the autonomous vehicle space—that ultimately need to be created, exercised, and evaluated—and finally rolled up into metrics,” said Binyamini. “We believe the number of scenarios could be in the hundreds of millions, and there are many categories, for example: on- and off-road hazards; different driving scenarios based on the location; weather conditions; pedestrians; cyclists; and other elements in the urban environment.”
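The “combinations of combinations” growth is easy to illustrate: even a handful of independent scenario dimensions multiplies into a large space. The dimensions and values below are illustrative examples, not Foretellix's actual taxonomy:

```python
from itertools import product

# Illustrative scenario dimensions (not Foretellix's actual taxonomy).
dimensions = {
    "road_type": ["urban", "highway", "rural"],
    "weather":   ["clear", "rain", "fog", "snow"],
    "light":     ["day", "dusk", "night"],
    "actor":     ["none", "pedestrian", "cyclist", "tailgater"],
    "hazard":    ["none", "debris", "pothole"],
}

# Every combination of one value per dimension is a candidate scenario.
scenarios = list(product(*dimensions.values()))
print(len(scenarios))  # 3 * 4 * 3 * 4 * 3 = 432 from just five small dimensions
```

Each added dimension multiplies the total, which is how realistic parameter sets reach the hundreds of millions of scenarios quoted above.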
One of the main challenges for developers is the automation of core scenarios for autonomous vehicles, when there are already so many combinations of combinations of situations to consider.
“It is extremely difficult for the human brain to think of all the specific scenarios and their variations, especially when these combinations of combinations are present,” said Binyamini. “What’s critical is to automate the generation of those so they can be exercised and you can validate, monitor, and aggregate all of the results into a dashboard that tells you where you are and guides you on what to do next.”
Achieving the desired quality means that a supplier meets a specified goal of scenario coverage (in the region of 98% or 99%) using the right combination of development techniques. Binyamini describes what is possible with Foretify as a level above traditional computer simulation but also a system that integrates all areas of vehicle evaluation.
“In autonomous vehicles, simulation is one of four categories of testing platforms: virtual testing; real-world physical testing; a test track with ‘urban environments and scenarios;’ and X in the loop,” he explained. “What we are proposing is something that covers all of those four categories in order to get the required level of confidence in the product.”
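One way to picture rolling coverage up across those four platforms is as the union of the scenarios each platform has exercised, measured against the full set of defined scenarios. This is a simplified sketch under that assumption, not Foretify's actual metric:

```python
# Simplified coverage roll-up across the four testing platforms (illustrative).
scenario_space = set(range(1000))          # IDs of all defined scenarios

covered_by = {
    "virtual":    set(range(0, 700)),      # simulation covers the bulk
    "test_track": set(range(650, 800)),
    "real_world": set(range(780, 850)),
    "x_in_loop":  set(range(840, 900)),
}

# A scenario counts as covered if any platform has exercised it.
covered = set().union(*covered_by.values())
coverage = len(covered & scenario_space) / len(scenario_space)
print(f"{coverage:.0%} of scenarios exercised")  # 90% here; the goal is 98-99%
```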
Meeting customer demands
Given the wide spread of manufacturers entering the autonomous vehicle market, from startups to established mainstream automakers, there are wildly different starting points for the Foretify solution. Some will have no data, while others will have already identified scenarios via, for example, accident databases. These scenarios, however, only scratch the surface of what needs to be covered in autonomous vehicle development.
“Rather than look at specific situations, we encourage companies to look at the category that it represents,” said Binyamini. “If we look at a car approaching a junction in an urban environment, there could be a yellow light, potentially a tailgater, and a pedestrian about to cross the road, which is wet and potentially slippery. So you have to make a decision about how to react. Do you stop the vehicle and risk being hit, or do you go through the light? All of these factors contribute to the scenario, and there are different outcomes for each one, depending on all of the variables described.”
Foretellix encourages clients to define categories of scenarios using what it calls a measurable scenario description language (MSDL). It then uses automation to generate every possible variation within a category; a single category can contain up to 100,000 specific scenarios, as long as they capture the behavior of that category.
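A category definition can be pictured as a set of parameter choices and ranges from which concrete scenarios are generated automatically. The Python sketch below is only a loose analogue; MSDL has its own syntax, and all parameter names here are invented for illustration:

```python
import random

# A scenario category for "car approaching an urban junction", expressed as
# parameter choices and ranges (invented names; a loose analogue of MSDL).
CATEGORY = {
    "light_state":      ["green", "yellow", "red"],         # discrete choice
    "pedestrian":       ["absent", "waiting", "crossing"],  # discrete choice
    "surface_friction": (0.2, 0.9),                         # continuous range
    "tailgater_gap_m":  (2.0, 30.0),                        # continuous range
}

def sample_scenario(rng: random.Random) -> dict:
    """Draw one concrete scenario from the category definition."""
    scenario = {}
    for name, spec in CATEGORY.items():
        if isinstance(spec, tuple):   # continuous range -> uniform draw
            scenario[name] = rng.uniform(*spec)
        else:                         # discrete choice -> pick one value
            scenario[name] = rng.choice(spec)
    return scenario

# One category definition expands into as many concrete scenarios as needed.
rng = random.Random(0)
batch = [sample_scenario(rng) for _ in range(10_000)]
print(len(batch), "concrete scenarios generated from one category")
```

The point of the sketch is the leverage: engineers maintain the short category definition, and the generator, rather than a human, enumerates the variations.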
“We are reducing the burden on our customers to ensure safety,” explained Binyamini. “There could be up to 1000 categories versus hundreds of millions of scenarios and, once they are defined, we can then define how they are mixed and combined. This process dramatically lowers the complexity of what the test and development engineers have to capture. It also makes scenarios much more extendable, reusable, and adaptable for similar environments.”
Foretify for the future
It’s very early days for the software system. Binyamini says that Foretellix is working with two major autonomous vehicle OEMs that can’t be named, as well as a number of suppliers to the market. However, the company is convinced it is a service that will help companies, regardless of the amount of experience they have in the automotive sector.
“The number one benefit to users of what we offer is the ability to improve complexity management,” he stated. “They all have a number of tests they’ve developed on their own, but they don’t have the measurement systems, so they don’t know how far they have gone down the development route. Nor do they know how to deal with the edge cases and the thousands of scenarios.
“In our discussions with customers, very quickly they ask how they specify the scenarios (using the categories) and also what they need to measure and how to evaluate it—information that is embedded in our system,” he added. “So it is clear they want to be able to improve their development processes and we believe we offer the best solution. We can specify both the categories of scenarios they need to test for and the combination and the measurement system at the same time, in an integrated way.”