
Car Smarts: The Future of Vehicle Tech


The computers, sensors and software in cars are getting so smart they may eventually detect whether the driver and passengers are happy or sad, comfortable or uncomfortable, alert or distracted. And as a result, driving automobiles can be made safer and more enjoyable.

“Driver monitoring is extremely important for active safety systems as well as automated systems,” says Phil Magney, founder and principal at VSI Labs, an automotive technology applied research firm based in St. Louis Park, MN. “It may have been a nice-to-have feature beforehand. Now you can say it definitely is a must-have feature.”

Nevertheless, he says, car occupant monitoring overall remains in a state of flux — particularly regarding user experience.

FROM GAZE TRACKING TO EMOTION SENSING

“Once you have the camera or other sensors around the vehicle, it’s just software that’s taking that information and doing different things with it. And so, you’re going to see an evolution of the vehicle continue even after you have it,” says Danny Shapiro, senior director of automotive at NVIDIA Corp. in Santa Clara, CA. “You’re going to get updates that will add new convenience features and new sensing capabilities,” he says. “The fundamental shift that’s enabling this is AI (artificial intelligence), and specifically deep learning, which is the ability to take that data and make sense out of it and analyze it with superhuman levels of detection.”

For example, Shapiro says a camera inside the vehicle could determine a driver’s attentiveness by detecting their eye blink rate and sensing their head pose. These checks could also determine if the driver is close to falling asleep. This could be merged with input from outside sensors that detect a pedestrian preparing to cross the car’s path. And the car may then determine if it needs to issue a collision warning and autobrake sooner than it would otherwise, to give a tired or distracted driver extra time to react, Shapiro says.
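The decision logic Shapiro describes can be pictured as a simple fusion rule: the lower the driver’s attentiveness, the more lead time the warning and braking system gives itself. The Python sketch below is purely illustrative; the thresholds, weights and signal names are assumptions, not values from NVIDIA’s system.

```python
# Hypothetical sketch: adjusting collision-warning timing based on a fused
# driver-attentiveness estimate. All thresholds and weights are illustrative.

def attentiveness_score(blink_rate_per_min: float, head_yaw_deg: float) -> float:
    """Return 0.0 (drowsy/distracted) .. 1.0 (fully alert)."""
    # Elevated blink rate and a head turned away from the road both lower
    # the score; the weights and caps here are placeholders.
    blink_penalty = min(blink_rate_per_min / 40.0, 1.0)   # ~40 blinks/min -> max penalty
    gaze_penalty = min(abs(head_yaw_deg) / 60.0, 1.0)     # 60 deg off-axis -> max penalty
    return max(0.0, 1.0 - 0.5 * blink_penalty - 0.5 * gaze_penalty)

def warning_lead_time_s(score: float, base_lead_s: float = 2.0) -> float:
    """Give a tired or distracted driver extra reaction time."""
    return base_lead_s + (1.0 - score) * 1.5               # up to 1.5 s earlier

def should_warn(pedestrian_time_to_path_s: float, score: float) -> bool:
    """Warn/autobrake when the pedestrian's time to the car's path is short."""
    return pedestrian_time_to_path_s <= warning_lead_time_s(score)

if __name__ == "__main__":
    alert = attentiveness_score(blink_rate_per_min=12, head_yaw_deg=5)
    drowsy = attentiveness_score(blink_rate_per_min=35, head_yaw_deg=40)
    print(should_warn(2.8, alert))    # False: alert driver, warning can wait
    print(should_warn(2.8, drowsy))   # True: warn and brake sooner
```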

Emotion-sensing technology could lead the car to take actions proactively, such as playing certain music or adjusting cabin temperature. Or it may make suggestions and engage in a conversation with a passenger, for instance, offering to lower a window if lip-reading software senses someone complaining about being hot.
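A minimal sketch of the kind of rule layer this implies, mapping a sensed occupant state to a proactive cabin action, might look like the following; the state labels and action names are assumptions for illustration, not part of any vendor’s system.

```python
# Illustrative mapping from a sensed occupant state to a proactive cabin action.
from typing import Optional

CABIN_ACTIONS = {
    ("hot", "complaining"): "offer_to_lower_window",
    ("hot", None): "lower_cabin_temperature",
    ("stressed", None): "play_calming_playlist",
}

def proactive_action(thermal_state: str, verbal_cue: Optional[str]) -> Optional[str]:
    # Prefer a rule matching both signals, then fall back to the thermal state alone.
    return CABIN_ACTIONS.get((thermal_state, verbal_cue),
                             CABIN_ACTIONS.get((thermal_state, None)))

print(proactive_action("hot", "complaining"))   # offer_to_lower_window
print(proactive_action("stressed", None))       # play_calming_playlist
```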

To realize this vision, NVIDIA has brought to market a new low-power (30 watt) computer system-on-a-chip named DRIVE Xavier and a complementary software development kit (SDK) named DRIVE IX (Intelligent Experience) that provides the building blocks for behavior understanding and action based on it.

The setup uses facial recognition technology to unlock the car door for an approaching driver, open the trunk if they are seen lugging grocery bags, and keep a car door locked to prevent someone inside from opening it into the path of an oncoming cyclist, Shapiro says. It will be in new vehicles shortly, he predicts. It was announced at CES 2018 that Tier 1 auto industry suppliers would receive first shipments of DRIVE Xavier early this year.

In fact, AI is a “very pragmatic” way to understand both the driver’s engagement and the context in which he’s operating, says VSI Labs’ Magney. “You can pick up on many different attributes and many different elements.” But the technology’s need for training data is never-ending and “nothing is ever finished,” he adds.

For Eyeris, a four-year-old Palo Alto, CA, company specializing in deep learning-based emotion recognition software, driver monitoring discussions began early and were met with skepticism as well as interest, says founder and CEO Modar Alaoui. He first presented “driver monitoring that incorporates emotional distraction as a measure for determining cognitive workload” at the 2014 TU Automotive conference in Detroit, when driver monitoring focused only on eye gaze and head position tracking. He adds, “We argued that is not enough to determine attention or distraction, because a person can still be looking at the right spot, their eyes are open and their head is still in the right direction, but they could be highly emotionally distracted.”

In 2017, Eyeris began working on emotional AI for monitoring a person’s face and upper body region employing standard (2D) cameras and showcased this technology at CES 2018 in a Tesla Model S demo car equipped with five cameras that track the driver and all passengers.

“We also released an algorithm for action recognition and activity prediction, which uses body tracking as a prerequisite,” Alaoui says. It recognizes, for instance, that the driver is smoking, texting, eating or drinking. Although the driver’s head and eyes are looking at the right place, he might be reaching out to the glove box or trying to open a bottle of water, which of course translates to distraction.

“By mating action, activity and body tracking with face analytics, we derive a holistic interpretation of the human behavior inside the vehicle,” he says. A higher layer of algorithms called a decision-making AI engine then responds to this interpretation and informs the vehicle — allowing it to passively or actively react, such as by readying a collision avoidance technology to take over. Further, Eyeris’ technology predicts imminent distraction on a second-by-second basis, Alaoui says.
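The combination Alaoui describes can be sketched as a rolling, second-by-second score that flags distraction even when gaze stays on the road; the activity labels, weights and threshold below are illustrative assumptions, not Eyeris’ algorithm.

```python
# Toy sketch: a recognized activity (texting, reaching for the glove box) can
# flag distraction even when gaze and head pose look fine.
from collections import deque

DISTRACTING_ACTIVITIES = {"texting": 0.9, "reaching_glove_box": 0.7,
                          "eating": 0.5, "smoking": 0.4, "none": 0.0}

def frame_distraction(gaze_on_road: bool, activity: str) -> float:
    gaze_term = 0.0 if gaze_on_road else 0.8
    return max(gaze_term, DISTRACTING_ACTIVITIES.get(activity, 0.3))

class DistractionMonitor:
    """Rolling, second-by-second distraction estimate over recent frames."""
    def __init__(self, window: int = 30):          # e.g. 30 frames ~ 1 second
        self.scores = deque(maxlen=window)

    def update(self, gaze_on_road: bool, activity: str) -> bool:
        self.scores.append(frame_distraction(gaze_on_road, activity))
        avg = sum(self.scores) / len(self.scores)
        return avg > 0.5                            # True -> prime collision avoidance

monitor = DistractionMonitor(window=3)
print(monitor.update(True, "none"))                 # False (avg 0.0)
print(monitor.update(True, "texting"))              # False (avg 0.45)
print(monitor.update(True, "texting"))              # True  (avg 0.6)
```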

Beyond safety, Alaoui anticipates Eyeris’ technology could be used in shared self-driving cars to tailor vehicle performance and interior and infotainment features — including the suspension’s ride quality, ambient lighting and streamed content — corresponding to the number of passengers and their genders, ages, activities and moods.

“The future of mobility is going to depend hugely on human-centric data in the ridership economy,” he declares. “This is really going to change transportation as we know it.”

While vehicles like the Tesla Model 3 and the 2018 Cadillac CT6 today have one camera in the cabin to watch the driver alone, “future generations of vehicles are going to have an average of between three and six cameras inside the car” to track all occupants, Alaoui says. He expects the first cars with three cabin cameras to be in dealerships next year, and next-generation cars to include as many as seven cabin cameras.

The company’s demonstration vehicle at CES 2019 will contain 10 cameras, Alaoui notes.

At least one driver-facing camera is a necessity in any Level 3 self-driving car, like the CT6, says Christian Reinhard, vice president and head of customer projects at Elektrobit, a supplier of embedded and connected software and services for the automotive industry, based in Erlangen, Germany. Level 3 requires the car to cede control to a human under certain conditions, and the camera confirms that a person is piloting. However, “you can really do a lot with the camera,” Reinhard proclaims. “You can even detect the heart rate of the passengers using the camera, and other health data.” And this can lead to a variety of health applications inside the car, he adds.
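Camera-based heart-rate estimation of the sort Reinhard mentions typically relies on remote photoplethysmography: tiny periodic color changes in facial skin reveal the pulse. The sketch below shows only the principle, under simplifying assumptions (a pre-cropped face region, a fixed frame rate), and is not any specific in-cabin implementation.

```python
# Hedged sketch of remote photoplethysmography (rPPG): estimate heart rate
# from the dominant frequency of the green channel averaged over a face region.
import numpy as np

def heart_rate_bpm(green_means: np.ndarray, fps: float) -> float:
    """green_means: per-frame mean of the green channel over the face ROI."""
    signal = green_means - green_means.mean()            # remove the DC component
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    # Restrict to a plausible human heart-rate band (0.7-3.0 Hz, i.e. 42-180 bpm).
    band = (freqs >= 0.7) & (freqs <= 3.0)
    dominant = freqs[band][np.argmax(spectrum[band])]
    return dominant * 60.0

# Synthetic check: a 1.2 Hz (72 bpm) pulse plus noise, sampled at 30 fps.
t = np.arange(0, 10, 1 / 30)
fake_signal = 0.5 * np.sin(2 * np.pi * 1.2 * t) + np.random.normal(0, 0.1, t.size)
print(round(heart_rate_bpm(fake_signal, fps=30)))         # ~72
```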

Indeed, emotion-recognition and physiology monitoring are popular subjects among automotive user experience designers now, says Jacek Spiewla, user-experience manager of advanced development at Mitsubishi Electric Automotive America in Northville, MI, which makes driver monitoring systems for automakers. But scant attention has been paid to what to actually do with this information. “You can’t ultimately determine whether somebody is distracted from the driving task,” he says. There can be indications, “but I really don’t know whether you’re spacing out or not,” he contends.

ID’ING WHO’S WHO AND WHAT’S WANTED

Also at CES 2018, Rinspeed AG — a Swiss car design firm — unveiled its Snap “skateboard and pod” self-driving concept car with another way to identify the vehicle’s passengers and tailor it to them: four iris scanners in dropdown screens supplied by Gentex Corp. The biometrics technology identifies each person and personalizes the seat position, HVAC controls, streaming audio and other settings according to user-determined presets. In addition, it facilitates secure access to cloud-based work files or e-commerce.
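The personalization step amounts to a lookup from a biometric identity to stored presets, roughly as in this hypothetical sketch; the profile fields and hash keys are assumptions for illustration.

```python
# Hypothetical sketch: resolve a biometric scan to a rider profile and presets.
from dataclasses import dataclass
from typing import Optional

@dataclass
class RiderProfile:
    seat_position: int        # seat-memory slot
    cabin_temp_c: float
    audio_station: str

# Enrollment store keyed by a biometric template hash (illustrative values).
PROFILES = {
    "iris_hash_001": RiderProfile(seat_position=2, cabin_temp_c=21.0,
                                  audio_station="jazz_stream"),
}

def apply_presets(iris_hash: str) -> Optional[RiderProfile]:
    # In a real vehicle these values would become commands to the seat,
    # HVAC and infotainment ECUs; unknown riders fall back to defaults.
    return PROFILES.get(iris_hash)

print(apply_presets("iris_hash_001"))
```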

For in-cabin monitoring, “the holy grail would be one camera that does everything,” says Craig Piersma, director of marketing at Gentex in Zeeland, MI. “That’s just not as easy as it sounds.”

On the other hand, VSI Labs’ Magney says, people are generally unwilling to be watched by a camera. “I’m a little leery of that whole approach, frankly,” he adds.

A further possibility is identifying a driver or passengers by their voice pattern to customize the in-cabin experience, says Daniel Sisco, director of cockpit systems at Renesas Electronics America Inc., in Milpitas, CA. On-screen user-interface choices could be fewer or greater depending on the audience. “That’s well within reach technically,” he says.

Fingerprint identification of drivers is coming, too, and will ultimately augment both computer vision and voice biometrics in self-driving cars, says Sunil Thomas, vice president of automotive at semiconductor maker Synaptics Inc., in San Jose, CA. “We think a fingerprint sensor is the entry point,” Thomas says. To begin with, it could be put on the vehicle’s “engine start” button for theft deterrence and to enable user-defined functions — and he expects this to be on the market in 2020.

Fast forward five years, and Thomas foresees voice interactions extending outside the vehicle, as well — cars in effect listening to and conversing with pedestrians. “We have some experimental versions of that currently,” he says.

“Technologies are there and capabilities are there,” concludes Synaptics’ Thomas. But the auto industry must “open its mind to think a little bit differently, to bring these technologies into their cars faster,” he declares.

See the latest in self-driving cars and vehicle intelligence at CES 2019.

Robert E. Calem
