Inside Ford’s class-leading autonomous vehicle program
Big news broke in early April 2017 when Navigant Research released a report ranking Ford Motor Co. as the overall leader in developing automated-driving systems. The report examined 18 companies developing such systems, rating each on 10 criteria: vision; go-to-market strategy; partners; production strategy; technology; sales, marketing, and distribution; product capability; product quality and reliability; product portfolio; and staying power.
According to the Leaderboard report, following Ford in developing automated-driving systems are General Motors, the Renault-Nissan Alliance, and Daimler.
Top Automated Driving Companies
- Ford
- General Motors
- Renault-Nissan Alliance
- Daimler
- Volkswagen Group
- Hyundai Motor Group
Tremendous progress has been made in just the last few years on the development of automated-driving systems. “However, as we get closer to deploying high-level automated driving, everyone involved must now address the remaining questions that are in many ways more difficult to answer than developing the foundational technologies,” said Sam Abuelsamid, Senior Research Analyst at Navigant Research. “The companies that have the resources and expertise to ensure that the automation technologies are robust enough to operate in a broad range of conditions while also supporting business models that bring access to the masses are the most likely to succeed.”
Ford’s Autonomous Vehicle Platform and Virtual Driver System
Ford quickly jumped on the Navigant Research ranking news by publishing a blog post titled, “What it Takes to be a Self-Driving Leader” by Raj Nair, Executive Vice President, Product Development, and CTO. He said that the company is making significant strides in the development of hardware (Autonomous Vehicle Platform) and software (Virtual Driver System). The progress builds upon experience from a fleet of Fusion Hybrid autonomous development vehicles testing in the U.S. since 2013. However, much work remains to bring SAE Level 4-capable self-driving cars to market by Ford’s publicly stated goal of 2021.
“Integration is key,” wrote Nair. “Our hardware and software platforms need to be integrated into an efficient, high-quality vehicle system. This is a complex task. All the various electrical and mechanical systems must talk to each other. It requires development of redundant actuator controls. Energy management must be optimized, as the computing system requires significant power. The powertrain and transmission must be calibrated with the goal of maximizing fuel efficiency. And, the suspension needs to be tuned to deliver a ride experience that passengers will enjoy.”
“Our plans call for self-driving cars initially to be used in mobility services, such as ride hailing, ride sharing, or package delivery fleets,” Nair added. “To operate the vehicles in a mobility service and provide such features as the user interface and point-to-point navigation, there needs to be a service provider. Plus, somebody needs to own and maintain the vehicles. At Ford, we’ve set up Ford Smart Mobility LLC to build our mobility commercialization strategy and capabilities, including how the vehicles will be managed and operated.”
Latest autonomous vehicle generation
Ford introduced its latest Fusion Hybrid autonomous development vehicle in January, with upgrades to both the Autonomous Vehicle Platform and Virtual Driver System. The latter represents a big leap in sensing and computing power, blogged Chris Brewer, Chief Program Engineer of Ford Autonomous Vehicle Development.
The Virtual Driver System is made up of sensors (LiDAR, cameras, and radar), algorithms for localization and path planning, computer vision and machine learning, highly detailed 3D maps, and the computational and electronics horsepower to make it all work.
Based on current and anticipated technology, Ford engineers are working to build two methods of perception—mediated and direct—into the Virtual Driver System, said Brewer. Mediated perception requires the creation of high-resolution 3D maps of the known environment, including the locations of stop signs, crosswalks, traffic signals, and other static features. The Virtual Driver System uses its sensors to continuously scan the area around the car and compare—or mediate—what it sees against the 3D map. Direct perception uses the sensors to see the vehicle’s positioning on the road and dynamic entities like pedestrians, cyclists, and other cars. This hybrid approach enables the Virtual Driver System to perform driving tasks as well as a human driver could, and potentially even better.
Data from all three sensor types are fed into the autonomous vehicle’s brain, where information is compared to the 3D map of the environment. The computer, located in the trunk, is the equivalent of several high-end computers, generating 1 terabyte of data an hour.
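The split between mediated and direct perception described above can be sketched in a few lines. The Python toy below is purely illustrative: the class name, feature labels, and structure are assumptions for explanation, not Ford's actual software.

```python
from dataclasses import dataclass

# Hypothetical sketch of the two perception modes: static features are
# reconciled against the stored 3D map (mediated perception), while dynamic
# entities are handled directly from live sensor data (direct perception).

@dataclass
class Detection:
    x: float      # position relative to the vehicle, meters (illustrative)
    y: float
    label: str    # e.g. "stop_sign", "pedestrian"

# Feature classes the high-resolution 3D map is assumed to already contain.
STATIC_MAP_FEATURES = {"stop_sign", "crosswalk", "traffic_signal"}

def classify_detections(detections):
    """Split detections into map-mediated (static) and direct (dynamic) sets."""
    mediated, direct = [], []
    for d in detections:
        (mediated if d.label in STATIC_MAP_FEATURES else direct).append(d)
    return mediated, direct
```

In a real stack the mediated set would be cross-checked against the map for localization, while the direct set feeds obstacle tracking and path planning.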
Accelerated testing and investment
An expanded fleet is accelerating real-world testing on roads in Michigan, California, and Arizona. Ford plans on growing the fleet even more, tripling its size to about 90 cars in 2017. To reach its 2021 launch goal, the company is investing in or collaborating with a number of startups and other companies to enhance its autonomous vehicle development, doubling its Silicon Valley team, and more than doubling its Palo Alto campus.
In perhaps the largest investment announcement, Ford said in February that it is pumping $1 billion over the next five years into Argo AI, a Pittsburgh-based artificial intelligence company, to develop a Virtual Driver System and for potential licensing to other companies. The current team developing Ford’s Virtual Driver System—the machine-learning software that acts as the brain of various autonomous vehicles—will be combined with the robotics talent and expertise of Argo AI.
Ford has also invested in Silicon Valley-based Velodyne to quickly mass-produce a more affordable automotive LiDAR sensor, and in Berkeley, CA-based Civil Maps to further develop high-resolution 3D mapping capabilities.
Ford acquired Israel-based computer vision and machine learning company SAIPS to strengthen its expertise in artificial intelligence and enhanced computer vision. The startup has developed algorithmic solutions in image and video processing, deep learning, signal processing, and classification.
Ford also signed an exclusive licensing agreement with Nirenberg Neuroscience, a machine vision company founded by neuroscientist Dr. Sheila Nirenberg, who cracked the neural code the eye uses to transmit visual information to the brain. Her work led to a powerful machine-vision platform for performing navigation, object recognition, facial recognition, and other functions, with many potential applications.
New details emerge at WCX17
During a technical expert panel discussion on automated vehicle systems at SAE International’s WCX17 event in April, Colm Boran, Ford’s Autonomous Vehicle Platform Manager, gave more details on the company’s Level 4 developments. Like most automakers, Ford has launched or is engineering Level 1 and Level 2 automated driving systems for the mainstream market, but at Ford a separate team is “working from the top down to get a true Level 4 vehicle. In our case, our intention is to launch a vehicle without a steering wheel and without pedals in the 2021 timeframe in some type of ride sharing/ride hailing service.”
Ford most likely will skip Level 3 autonomy. “There's a line there for us between Levels 2 and 4…that at the moment we're not concentrating on; that's not to say that we never will, but that's not our approach at the moment,” Boran said.
Map precision is a key part of Level 4 automated-driving systems, continued Boran. “The idea being that these Level 4 vehicles are going to be designed to drive in areas that basically have been driven and memorized, so the maps are much more detailed than what you may be used to on an app on your phone, for example. Using the LiDARs, cameras, radars, and the map, the vehicle can figure out where it is with great precision, down to as low as a few centimeters.”
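Production Level 4 stacks achieve that centimeter-level precision by solving a full scan-matching problem (for example, ICP over LiDAR point clouds) against the stored map. As a simplified illustration of the idea only, the sketch below refines a rough pose estimate from matched landmark pairs; all names and values are assumptions, not Ford's method.

```python
# Toy map-relative localization: a rough (GPS-level) pose guess is refined by
# comparing where sensed landmarks should be according to the high-resolution
# map with where the rough pose placed them. Real systems estimate a full
# rigid transform; this sketch recovers only a 2D translation correction.

def refine_pose(pose_guess, matched_pairs):
    """pose_guess: (x, y) rough position estimate, meters.
    matched_pairs: list of ((map_x, map_y), (sensed_x, sensed_y)) pairs,
    where sensed coordinates were computed using the rough pose."""
    if not matched_pairs:
        return pose_guess  # nothing to correct against
    # The average map-minus-sensed offset is the pose correction.
    dx = sum(m[0] - s[0] for m, s in matched_pairs) / len(matched_pairs)
    dy = sum(m[1] - s[1] for m, s in matched_pairs) / len(matched_pairs)
    return (pose_guess[0] + dx, pose_guess[1] + dy)
```

If every sensed landmark is offset from its mapped position by the same small error, that shared offset is exactly the correction applied to the vehicle's pose.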
Ford’s Level 4 vehicle will not be dependent on vehicle-to-infrastructure advancements. “We're not designing our vehicles to depend on that; of course, we’ll take advantage of that technology when it proliferates enough into the market that it makes sense.”
Public concerns for safety are heightened with automated-driving cars, especially for those with no steering wheels or pedals.
Ford thinks it can address this concern with a “very capable set of sensors [that] can outperform the human,” said Boran. “We have 360-degree capability; it never gets distracted; it doesn't get tired. The LiDAR and radar [combination], for example, can see better at night than the human can.” Although the autonomous vehicle controller is not yet “nearly as capable as the human brain, we make up for some of that on the sensing side.”
To allay further safety fears, Ford engineers are also building in system redundancies. For instance, there is redundancy in communications, power, and control: “Our vehicles actually have two independent 12-volt systems.”
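As an illustration of that kind of redundancy, the sketch below models a failover decision between two independent supplies. The threshold and names are assumptions for the example, not details Ford has published.

```python
# Hypothetical failover logic inspired by the dual independent 12-volt
# systems mentioned above. The healthy-voltage threshold is an assumption.

NOMINAL_MIN_VOLTS = 11.0

def select_supply(primary_volts, backup_volts):
    """Return which supply should feed the controls, or raise if both fail."""
    if primary_volts >= NOMINAL_MIN_VOLTS:
        return "primary"
    if backup_volts >= NOMINAL_MIN_VOLTS:
        return "backup"
    # Neither channel is healthy: the vehicle must request a safe stop.
    raise RuntimeError("both supplies degraded; request safe stop")
```

The same voting pattern generalizes to redundant communications and control channels: as long as one independent channel is healthy, the system keeps operating.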
Another concern for autonomous vehicle technology in general is the ability to operate in inclement weather or on roads with faint or no lane markings. More than a year ago at Mcity—the 32-acre (13-ha) mock city and proving ground built for testing driverless cars on the University of Michigan campus in Ann Arbor, MI—Ford demonstrated its technology navigating one of the hardest use cases: snow-covered roads.
Said Boran: “We were able to go out and drive through that using the LiDAR and other sensors to pick up the three-dimensional structures—posts, poles, buildings, and so on—and with that map stored on the vehicle, we know precisely where we are. Again, there’s a big contrast compared to, say, a Level 1 system that's just camera-based and is trying to find lane markings on the road. This [precise map] is one of the real big advantages.”