Intel and Mobileye begin testing autonomous vehicle fleet in Jerusalem
The first phase of the Intel and Mobileye 100-car autonomous vehicle (AV) fleet has begun operating in Jerusalem. The vehicles are being driven on public roads to demonstrate the Mobileye approach and technology, to prove that the Responsibility-Sensitive Safety (RSS) model increases safety, and to integrate key learnings into products and customer projects. In the coming months, the fleet will expand to the U.S. and other regions.
The system is designed to meet important goals of safety and economic scalability from the beginning. Specifically, it targets a vehicle that gets from point A to point B faster, more smoothly, and less expensively than a human-driven vehicle; can operate in any geography; and achieves a verifiable, transparent 1,000-fold safety improvement over a human-driven vehicle without the need for billions of miles of validation testing on public roads.
Jerusalem was selected as a testing environment in part because Mobileye is based in Israel. In addition, the location is ideal for demonstrating that the technology can work in any geography and under all driving conditions. Jerusalem is known for aggressive driving: the roads are not perfectly marked, merges are complicated, and people do not always use crosswalks. An autonomous car that travels at an overly cautious speed would congest traffic and could itself cause an accident. It must instead drive assertively and make quick decisions, like a local driver.
This environment has allowed Mobileye to test the cars and technology while refining the driving policy as testing progresses. Many goals must be optimized, some of which are at odds with one another: being extremely safe without being overly cautious; driving with a human-like style (so as not to surprise other drivers) but without making human errors. To achieve this delicate balance, the Mobileye AV fleet separates the system that proposes driving actions from the system that approves (or rejects) those actions. Both systems are fully operational in the current fleet.
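As an illustration of this separation, here is a minimal sketch in Python of the propose-then-approve loop. All names, thresholds, and the toy policy logic are our assumptions for demonstration, not Mobileye's software:

```python
from dataclasses import dataclass

@dataclass
class Action:
    acceleration: float  # m/s^2 (negative = braking)
    steering: float      # radians

def propose_action(gap_to_lead_m: float) -> Action:
    """Stand-in for the RL-trained policy: proposes an assertive,
    human-like maneuver (here, a toy rule based on the gap ahead)."""
    return Action(acceleration=1.0 if gap_to_lead_m > 10 else 0.0, steering=0.0)

def rss_approves(gap_to_lead_m: float, action: Action) -> bool:
    """Stand-in for the RSS safety layer: vetoes any proposal that
    would close an already-too-small gap to the car ahead."""
    MIN_SAFE_GAP_M = 20.0  # placeholder; RSS computes the real bound
    return not (action.acceleration > 0 and gap_to_lead_m < MIN_SAFE_GAP_M)

def drive_step(gap_to_lead_m: float) -> Action:
    proposal = propose_action(gap_to_lead_m)
    if rss_approves(gap_to_lead_m, proposal):
        return proposal
    # Rejected proposals fall back to a guaranteed-safe maneuver.
    return Action(acceleration=-2.0, steering=0.0)

print(drive_step(gap_to_lead_m=15.0))  # proposal vetoed -> gentle braking
```

The point of the pattern is that the proposing system can be tuned aggressively for a human-like style, because any unsafe proposal is caught and replaced by the independent safety layer.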
The part of the driving policy system that proposes actions is trained offline to optimize an assertive, smooth, and human-like driving style. It is proprietary software developed using artificial intelligence-based reinforcement learning techniques. Just like a responsible human driver, this “driver” needs to understand where assertive driving becomes unsafe before it can confidently drive assertively. To provide that understanding, the AI system is governed by a formal safety envelope called Responsibility-Sensitive Safety.
RSS is a model that formalizes the common-sense principles of what it means to drive safely into a set of mathematical formulas that a machine can understand (safe following/merging distances, right of way, and caution around obstructed objects, for example). If the AI-based software proposes an action that would violate one of these common-sense principles, the RSS layer rejects the decision.
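As a concrete example, the published RSS formulation gives a closed-form minimum safe following distance: the rear car must leave enough room that, even if the lead car brakes as hard as physically possible, the rear car can respond after a reaction time (during which it may even have been accelerating) and still brake to a stop in time. Below is a sketch of that formula; the variable names and the parameter values are our illustrative assumptions:

```python
def rss_min_following_distance(
    v_rear: float,              # follower speed (m/s)
    v_front: float,             # lead-car speed (m/s)
    rho: float = 0.5,           # follower response time (s) -- assumed value
    a_max_accel: float = 3.0,   # worst-case follower acceleration during rho (m/s^2)
    a_min_brake: float = 4.0,   # braking the follower is guaranteed to apply (m/s^2)
    a_max_brake: float = 8.0,   # worst-case lead-car braking (m/s^2)
) -> float:
    """Minimum safe longitudinal gap per the published RSS formulation.
    The parameter defaults here are illustrative, not Mobileye's."""
    v_react = v_rear + rho * a_max_accel  # follower speed after the response time
    d = (v_rear * rho
         + 0.5 * a_max_accel * rho ** 2
         + v_react ** 2 / (2 * a_min_brake)   # follower's stopping distance
         - v_front ** 2 / (2 * a_max_brake))  # lead car's stopping distance
    return max(d, 0.0)

# Example: both cars at 20 m/s (~72 km/h) -> a gap of roughly 43 m is required.
print(f"{rss_min_following_distance(20.0, 20.0):.1f} m")
```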
In other words, the AI-based driving policy is how the AV gets from point A to point B; RSS is what prevents the AV from causing dangerous situations along the way. RSS enables safety that can be verified within the system’s design without requiring billions of miles driven by unproven vehicles on public roads. The fleet currently implements Mobileye’s view of the appropriate safety envelope, but Mobileye has shared this approach publicly and looks to collaborate on an industry-led standard that is technology neutral (i.e., can be used with any AV developer’s driving policy).
Video: An Intel and Mobileye autonomous test car drives in Jerusalem. Using a 360-degree view made possible by eight cameras, the vehicle maneuvers on busy Jerusalem roadways, demonstrating driving skills such as lane changes in various dense-traffic scenarios. (Credit: Intel Corporation)
During this initial phase, the fleet is powered only by cameras. In a 360-degree configuration, each vehicle uses 12 cameras: eight providing long-range surround view and four used for parking. The goal in this phase is to prove that a comprehensive end-to-end solution can be built by processing the camera data alone. Such an end-to-end AV solution comprises a surround-view sensing state capable of detecting road users, drivable paths, and the semantic meaning of traffic signs/lights; real-time creation of HD maps, along with the ability to localize the AV with centimeter-level accuracy; path planning (i.e., driving policy); and vehicle control (sketched below).
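Read as a pipeline, those four components might be decomposed as the following interfaces. This decomposition and every name and type in it are our illustrative assumptions, not Mobileye's actual software:

```python
from typing import NamedTuple

class SensingState(NamedTuple):
    road_users: list         # detected vehicles, pedestrians, cyclists
    drivable_paths: list     # geometry of the paths the AV may take
    traffic_semantics: dict  # meanings of the traffic signs/lights in view

def sense(camera_frames: list) -> SensingState:
    """Stage 1: build the surround-view sensing state from the 12 cameras."""
    raise NotImplementedError  # placeholder for the perception stack

def localize(state: SensingState, hd_map: dict) -> tuple:
    """Stage 2: refresh the HD map in real time and localize the AV on it
    with centimeter-level accuracy."""
    raise NotImplementedError

def plan(state: SensingState, pose: tuple) -> list:
    """Stage 3: driving policy -- plan the path from point A to point B."""
    raise NotImplementedError

def control(trajectory: list) -> dict:
    """Stage 4: translate the planned trajectory into steering, throttle,
    and brake commands."""
    raise NotImplementedError
```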
The camera-only phase is the strategy for achieving what Mobileye calls “true redundancy” of sensing: a sensing suite built from multiple independently engineered sensing systems, each of which can support fully autonomous driving on its own. This contrasts with fusing raw sensor data from multiple sources early in the process, which in practice yields a single sensing system. True redundancy provides two major advantages. First, the amount of data required to validate the perception system is massively lower (on the order of the square root of 1 billion hours, rather than 1 billion hours, as the calculation below shows). Second, if one of the independent systems fails, the vehicle can continue operating safely, whereas a vehicle with a low-level fused system must cease driving immediately. A useful analogy for the fused system is a string of Christmas tree lights in which the entire string fails when one bulb burns out.
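The square-root figure follows from independence: if two independently engineered sensing systems fail independently, the probability that both fail at once is the product of their individual failure rates, so each system only needs to be validated to the square root of the overall target. A back-of-the-envelope check (only the 10⁻⁹-per-hour target is implied by the text; the rest is arithmetic):

```python
import math

# Target: demonstrate a combined perception failure rate of 1e-9 per hour,
# which naively requires on the order of 1e9 hours of validation data.
target_failure_rate = 1e-9

# With two INDEPENDENT sensing systems, both must fail simultaneously:
# p_combined = p_cameras * p_radar_lidar. If each system is validated to
# the square root of the target, their product meets it.
per_system_rate = math.sqrt(target_failure_rate)  # ~3.16e-5 per hour
hours_per_system = 1 / per_system_rate            # ~31,623 hours of data each

print(f"each system: failure rate {per_system_rate:.2e}/h, "
      f"~{hours_per_system:,.0f} validation hours "
      f"(vs ~{1 / target_failure_rate:,.0f} hours for a single fused system)")
```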
The radar/LiDAR layer will be added in the coming weeks as a second phase of development; synergies among the sensing modalities can then be used to increase the “comfort” of driving.
The end-to-end compute system in the AV fleet is powered by four Mobileye EyeQ4 SoCs. An EyeQ4 delivers 2.5 tera operations per second (TOPS, for deep networks with an 8-bit representation) at 6 W. First produced in 2018, the EyeQ4 is Mobileye’s latest SoC; it will see four production launches this year, with an additional 12 slated for 2019. The SoC targeting fully autonomous driving is the Mobileye EyeQ5, whose engineering samples are due later this year. An EyeQ5 delivers 24 TOPS, roughly 10 times the compute of an EyeQ4. In production, Mobileye plans for three EyeQ5s to power a full L4/L5 AV. The system on the road today therefore has roughly one-seventh of the computing power (10 TOPS across four EyeQ4s vs. 72 TOPS across three EyeQ5s) that will be available in the next-generation EyeQ5-based compute system beginning in early 2019.
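For reference, a quick check of the compute figures quoted above:

```python
# Compute-budget arithmetic from the per-chip figures in the text.
EYEQ4_TOPS, EYEQ5_TOPS = 2.5, 24.0
current_fleet = 4 * EYEQ4_TOPS  # four EyeQ4s in the test fleet -> 10 TOPS
production    = 3 * EYEQ5_TOPS  # three EyeQ5s planned for L4/L5 -> 72 TOPS
print(f"{current_fleet} TOPS now vs {production} TOPS planned "
      f"({current_fleet / production:.0%} of the future budget)")
```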
Professor Amnon Shashua, Senior Vice President of Intel Corp. and the Chief Executive Officer and Chief Technology Officer of Mobileye, said, “The Mobileye-Intel approach is contrary to industry common practice in the field, which is to over-subscribe the computing needs during R&D (i.e., 'give me infinite computing power for development') and then later try to optimize to reduce costs and power consumption. We, on the other hand, are executing a more effective strategy by under-subscribing the computing needs so that we maintain our focus on developing the most efficient algorithms for the sensing state, driving policy and vehicle control.
“We certainly have much work ahead of us, but I’m extremely proud of the Mobileye and Intel development teams for their hard work and ingenuity to enable this first significant step. Our goal, in support of our automaker customers, is to bring this system to series production in L4/L5 vehicles by 2021.”