Connected and autonomous cars are driving a new generation of data center requirements
The automotive industry is entering a new, highly competitive, transitional period in which demand for new conveniences, safety capabilities and selling models is driving dramatic change.
The scale and intensity at which automotive OEMs and suppliers must bring innovations to market – while containing costs, mitigating risks, managing product complexity and maintaining compliance – are daunting.
By 2025, connected vehicles are projected to generate $150 billion in annual revenue, grow to 100 million vehicles globally and, under currently designed network capabilities and business models, transmit over 100 petabytes of data to the cloud per month. Beyond that, future automotive services are expected to require 10 exabytes per month, approximately 10,000 times the present volume.
Connected and autonomous vehicles will soon communicate in real time with each other (V2V) and with infrastructure (V2I) for enhanced safety and convenience. Automotive OEMs and suppliers will also leverage this data in real time to offer new services and to predict potential vehicle system failures before the customer is even aware of them. Once an industry of pure hardware and adrenaline, automotive design is increasingly dependent upon, and differentiated by, software. Visits to the dealership are being replaced by over-the-air bug fixes, and new features or options not initially installed in the vehicle can likewise be downloaded directly, again without a trip to the neighborhood dealership.
Such complexity requires significant over-the-air (OTA) management, monitoring and security. Further, software updates and high-resolution maps will require a new approach, one that leverages cloud-native architectures, microservices, dynamic policy allocation and formal methods for software assurance and security.
As a result, a new generation of artificial intelligence (AI)-enabled data centers will emerge, equipped with new hardware and software architectures that enable large-scale, real-time analytics and machine intelligence. Compute and storage at the edge will be optimized to work seamlessly as an extension of the overall IT system. Standards will play an important role, and open architectures will be critical.
Impact on data center infrastructure and management
Over the next five to ten years, connected cars will need to be equipped with very fast (5G) internet access, artificial intelligence and the ability to apply big data analytics to intelligent driving. Vehicles will be required to connect to both dedicated data center clouds and various public clouds through edge compute and networks to transfer large amounts of data in real time.
We will see AI in the car become more mainstream to meet the complexity of connected car workloads. Carefully designed, efficient in-vehicle compute and storage will be critical so that power is spent on driving range rather than IT overhead. Data structures will be simplified with a semantically rich data management framework and active archiving.
At the forefront of this evolution is Advanced Driver Assistance System (ADAS) development, which is itself introducing disruptive requirements on engineering IT infrastructure – particularly storage, where even entry-level capacities are measured in petabytes. Today, ADAS includes features such as automatic braking, collision protection and emergency assistance.
Since many ADAS systems are safety-critical, data capture requirements are high and will increase exponentially the greater the level of automation. As data volumes grow, the limitations of traditional storage and data center architectures become increasingly hard to ignore.
Validation is a key strategic stage of the ADAS lifecycle. Design validation demands exhaustive testing to represent diverse traffic scenarios and environmental conditions, which might include road design, driver and pedestrian behaviors, traffic congestion, weather conditions, vehicle characteristics and variants, security and more. The ability to deliver performance at scale is key.
SAE International has defined six levels of driving automation (zero through five), and most modern cars today have features at level two or three. Today's SAE level three ADAS projects have already outstripped legacy storage solutions, and with level four and five projects around the corner, the need for storage solutions optimized specifically for high performance and high concurrency with massive scalability is evident.
For example, just one front-looking radar sensor can generate 2,800 Mbit of data per second. For SAE level three automation, up to 200,000 km of sensor data is commonly required; captured at 60 km/h, that is over 3,300 hours of data, or about 4.2 PB for just one sensor. Cars today already carry more than 10 sensors on average. In the future, when cars evolve to SAE level five, fully autonomous operation, expect more sensors and more data captured per sensor (~240 million km) – roughly 1,000 times more data per sensor.
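The figures above can be checked with simple arithmetic. A minimal sketch, using the per-sensor rate, capture distance and average speed quoted in the text:

```python
# Back-of-envelope check of the per-sensor data volumes quoted above.
# Inputs: one radar sensor streaming 2,800 Mbit/s continuously, and
# 200,000 km of driving captured at an assumed average of 60 km/h.

SENSOR_RATE_MBIT_S = 2_800   # per-sensor data rate, Mbit/s
CAPTURE_KM = 200_000         # km of sensor data required for SAE level 3
AVG_SPEED_KMH = 60           # assumed average capture speed, km/h

hours = CAPTURE_KM / AVG_SPEED_KMH            # driving hours captured
seconds = hours * 3_600
bytes_total = SENSOR_RATE_MBIT_S * 1e6 / 8 * seconds
petabytes = bytes_total / 1e15

print(f"{hours:,.0f} hours of data, {petabytes:.1f} PB for one sensor")
# → 3,333 hours of data, 4.2 PB for one sensor
```

Multiplying by 10+ sensors per car, and by the ~1,000-fold increase projected for level five, shows how quickly the totals reach the exabyte range.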
Another key challenge of ADAS development is the contractual and regulatory commitments surrounding test data retention. Keeping tens to hundreds of petabytes of data in high-performance storage is certainly a requirement during the simulation and validation phases. But this data must also be retained for multiple decades, and with service contracts commonly mandating restoration and re-simulation times measured in days, traditional archive solutions such as tape and cloud are simply not viable.
Therefore, ADAS validation architecture needs to be future-proofed by enabling storage capacity to be upgraded seamlessly, linearly and non-disruptively, without impacting storage performance.
Because of time-to-market requirements, as well as the high cost of failure, it is vital to have a shared, reliable and effective ADAS infrastructure for consistent development and validation. We’re talking about a staggering amount of data that will require unprecedented storage capacity, power, connectivity and intelligence.
As autonomous vehicle development advances along the SAE scale, performance requirements will be hard to accurately forecast; it is therefore imperative that storage performance scales predictably with capacity, thereby future-proofing the solution investment.