Sensing a new reality for autonomous vehicles
How will human beings handle decorum in the era of driverless cars? Who gets the power and prestige to say “Do you want me to pull this car over RIGHT NOW?” when the occupants are misbehaving? Will there have to be a whole new set of “unwritten rules” to govern passenger behavior inside the cabin?
Currently, most people would acknowledge that the driver generally gets to call the shots inside the car, starting with the number one priority: which radio station to play. And, heading to the car, if you were the first to yell “SHOTGUN,” it is universally recognized that you are taking on certain secondary responsibilities toward the driver and passengers.
How will all that change when cars start to drive themselves?
When the first fully autonomous robo-car gets the green light, the initial application is expected to be in mobility services, a term that covers everything from ride-hailing companies such as Uber and Lyft to traditional taxis and buses. These companies will send their driverless vehicles along established “routes” where countless miles have already been driven, providing well-understood performance and safety. This is where the group dynamics of a new cabin reality will perhaps come in.
The current reality is that the taxi driver, the bus driver, or your Mom or Dad has been responsible for monitoring the conditions and actions inside a vehicle. In the driverless car era, however, that responsibility for “passenger awareness” will have to be handled by technology.
The good news is that the technology exists. The car you drive today includes a large number of sensors throughout the vehicle that perform various functions for both safety and comfort. But it is also now possible to build, from the ground up, a single all-encompassing sensor that works with automotive hardware and software to protect drivers and passengers by constantly scanning and tracking occupants and objects anywhere in the vehicle. It is now possible to integrate a car’s seatbelts, airbags, and other safety systems so that immediate alerts come from just one sensor monitoring the entire interior.
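One way to picture a single interior sensor feeding many safety systems is a simple publish/subscribe hub: the sensor publishes occupant events, and each safety system (seatbelt reminders, airbag control, alerts) subscribes to them. This is a minimal, hypothetical sketch; the class and event names are illustrative, not from any real automotive API.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class OccupantEvent:
    """A hypothetical reading from one interior-monitoring sensor."""
    seat: str        # e.g. "rear_right"
    is_person: bool  # person vs. inanimate object
    belted: bool     # seatbelt fastened?


class InteriorSensorHub:
    """One sensor fans its readings out to many safety subsystems."""

    def __init__(self) -> None:
        self._subscribers: List[Callable[[OccupantEvent], None]] = []

    def subscribe(self, handler: Callable[[OccupantEvent], None]) -> None:
        self._subscribers.append(handler)

    def publish(self, event: OccupantEvent) -> None:
        for handler in self._subscribers:
            handler(event)


alerts: List[str] = []


def seatbelt_monitor(event: OccupantEvent) -> None:
    """One subscriber: raise an alert for any unbelted person."""
    if event.is_person and not event.belted:
        alerts.append(f"seatbelt alert: {event.seat}")


hub = InteriorSensorHub()
hub.subscribe(seatbelt_monitor)
hub.publish(OccupantEvent(seat="rear_right", is_person=True, belted=False))
print(alerts)  # → ['seatbelt alert: rear_right']
```

In a real vehicle, airbag deployment logic or a left-behind-item alert would simply be additional subscribers on the same hub, which is what makes the single-sensor design cheaper than wiring separate sensors to each system.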
This cutting-edge sensor technology provides real-time, comprehensive information on occupancy status based on three interconnected layers of information: video image recognition (2D), depth mapping (3D), and micro- to macro-motion detection. The sensor detects the location and physical dimensions of each occupant, can distinguish a person from an inanimate object, and can even detect presence in the vehicle when it has no direct line of sight. This enables hard data on vehicle entry and exit times, intruders, vandalism, and violence, and even alerts on items left behind.
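The three layers described above could be fused into a single occupancy decision along these lines. The function below is a simplified, hypothetical sketch: the thresholds and labels are invented for illustration, not taken from the actual product.

```python
def classify_occupant(image_label: str, depth_volume_l: float,
                      motion_level: float) -> str:
    """Fuse three hypothetical sensing layers into one occupancy status.

    image_label    -- 2D image-recognition result: "person", "bag", "empty"
    depth_volume_l -- approximate 3D volume of the detection, in litres
    motion_level   -- micro-to-macro motion score, 0.0 (still) to 1.0
    """
    # Nothing seen and nothing occupying space: the seat is empty.
    if image_label == "empty" and depth_volume_l < 1.0:
        return "empty"
    # Micro-motion (e.g. breathing) can reveal a person even when the
    # 2D view is ambiguous, such as a child asleep under a blanket.
    if image_label == "person" or motion_level > 0.1:
        return "person"
    # Something with volume but no motion: an item left behind.
    return "object_left_behind"


print(classify_occupant("person", 60.0, 0.5))  # → person
print(classify_occupant("bag", 20.0, 0.0))     # → object_left_behind
print(classify_occupant("bag", 30.0, 0.3))     # → person (motion wins)
```

The point of the fusion is that each layer covers the others' blind spots: 2D recognition handles the clear cases, depth rules out flat visual artifacts, and motion detection catches presence without direct line of sight.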
The ability to consolidate the capabilities of many sensors into a single unit is an important trend, one that empowers autonomous vehicle manufacturers to build safer cars at lower cost by eliminating the need to install multiple sensors throughout the car. A single, multi-layer sensor that can securely monitor, analyze, and communicate everything going on INSIDE the car will be an important contribution to the safety and in-car comfort of autonomous vehicles.
For more information on Guardian, visit www.guardian-optech.com