It’s no surprise that a future of self-driving cars, with passengers acting as co-drivers, is fast approaching. Yet while automakers race to be the first to bring a fully autonomous vehicle to market, there is also a growing focus on the driver.
Cars that are 100 percent autonomous (and affordable) are still decades away from hitting the road. In the meantime, semi-autonomous cars must learn to understand the driver better, and machine learning and computer vision inside the cabin can make that possible. Below are three areas of the in-cabin environment that need improvement to enable an immersive driving experience, whether the driver is actively driving or cruising in autonomous mode.

1. Driver recognition

Carpooling will take on a new meaning as the level and sophistication of autonomous vehicles improve. Users will begin sharing vehicles to become more eco-friendly and to reduce the cost of owning a vehicle that spends most of its time idle. This is where driver recognition and the convenience of personalization come in. Computer vision can analyze the face of the driver and/or passenger to determine who out of a pool of drivers is sitting behind the wheel. Going one step further, deep learning and AI capabilities will allow the vehicle to access information about the person who is driving. Once the AI recognizes the occupants, the car can automatically adjust to the driver’s personalized settings, such as temperature, seat position, side and rearview mirror positions, infotainment preferences, and radio volume.
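To make that matching step concrete, here is a minimal sketch using the open-source face_recognition library. The driver names, settings fields, and image paths are hypothetical; a production system would run on an embedded cabin-camera pipeline rather than image files.

```python
# Minimal sketch: identify the driver from an enrolled pool, then load a profile.
# Assumes the open-source face_recognition library; profiles are hypothetical.
import face_recognition

PROFILES = {
    "alice": {"seat_position": 4, "temperature_c": 21, "radio_volume": 12},
    "bob":   {"seat_position": 7, "temperature_c": 23, "radio_volume": 18},
}
# One reference face encoding per enrolled driver (hypothetical enrollment photos).
KNOWN_ENCODINGS = {
    name: face_recognition.face_encodings(
        face_recognition.load_image_file(f"{name}.jpg"))[0]
    for name in PROFILES
}

def identify_driver(cabin_frame_path):
    """Return the enrolled driver whose face best matches the cabin camera frame."""
    frame = face_recognition.load_image_file(cabin_frame_path)
    encodings = face_recognition.face_encodings(frame)
    if not encodings:
        return None  # no face visible in the driver's seat
    names = list(KNOWN_ENCODINGS)
    distances = face_recognition.face_distance(
        [KNOWN_ENCODINGS[n] for n in names], encodings[0])
    best = min(range(len(names)), key=lambda i: distances[i])
    # Accept the match only below a commonly used embedding-distance threshold.
    return names[best] if distances[best] < 0.6 else None

driver = identify_driver("cabin_frame.jpg")
if driver:
    settings = PROFILES[driver]  # hand off to (hypothetical) vehicle controls
```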
Driver data such as age and gender will play an important role in the future of connected and semi-autonomous cars. Vehicles with such capabilities will be able to offer targeted content that’s relevant to the car’s current occupants. For example, the AI could present points of interest matched to the occupants’ demographics (say, nearby child-friendly restaurants when children are detected in the car) on the head-up display (HUD). Analyzing driver demographics in real time will also let a radio or music streaming service (such as Spotify, Pandora, or Apple Music) serve the most relevant ads.

2. Driver awareness

Distracted driving is the number one cause of car accidents, and drowsiness is one cause of distraction that should not be overlooked. The National Highway Traffic Safety Administration conservatively estimates that 100,000 police-reported crashes each year are the direct result of driver fatigue, resulting in an estimated 1,550 deaths, 71,000 injuries, and $12.5 billion in monetary losses. Using computer vision for iris (gaze) tracking, eye openness detection, eyelid blink rate tracking, and head pose detection, in-car sensing technology can determine the driver’s drowsiness and inattentiveness in real time. If the driver is dozing off behind the wheel, the car could sound an alarm to wake the driver or switch into autonomous driving mode to save their life and protect other drivers on the road.
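One widely published way to quantify eye openness is the eye aspect ratio (EAR), which compares vertical to horizontal distances between eye landmarks; a low ratio sustained over consecutive frames suggests the eyes are closing. The sketch below uses that open technique with illustrative thresholds; it is not any particular automaker’s method.

```python
# Sketch of eye-aspect-ratio (EAR) drowsiness detection over facial landmarks.
# Assumes six landmarks per eye (p1..p6) in the ordering popularized by
# Soukupova & Cech (2016); thresholds are illustrative, not tuned.
from math import dist

EAR_THRESHOLD = 0.21          # below this, the eye is treated as closed
CLOSED_FRAMES_FOR_ALERT = 48  # ~1.6 s of continuous closure at 30 fps

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmark points around one eye."""
    p1, p2, p3, p4, p5, p6 = eye
    vertical = dist(p2, p6) + dist(p3, p5)  # two vertical landmark distances
    horizontal = dist(p1, p4)               # one horizontal landmark distance
    return vertical / (2.0 * horizontal)

closed_streak = 0

def update(left_eye, right_eye):
    """Call once per video frame; returns True when a drowsiness alert should fire."""
    global closed_streak
    ear = (eye_aspect_ratio(left_eye) + eye_aspect_ratio(right_eye)) / 2.0
    closed_streak = closed_streak + 1 if ear < EAR_THRESHOLD else 0
    return closed_streak >= CLOSED_FRAMES_FOR_ALERT
```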
This concept is not completely new; automakers are already installing cameras inside the car to evaluate the driver, but cameras alone are not enough. The true magic happens when you couple embedded computer vision with deep learning software that detects the driver’s state and analyzes it in real time, locally in the car. This enables the in-car sensor to track and recognize more than just the driver’s eyelids. The AI can learn the specific driver’s habits and features: if a person naturally tends to blink a lot, the car will not raise a drowsiness alert but will instead recognize this as normal behavior. Likewise, some people’s eyes simply don’t open very widely; the car will learn this through facial analysis and know not to alert for drowsiness unnecessarily.
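One simple way to express that personalization, sketched below under the same EAR assumption, is to calibrate a per-driver baseline: measure this driver’s normal eye openness early in the trip and alert only on a significant drop from their own baseline rather than from a fixed threshold. The calibration window and deviation factor are illustrative.

```python
# Sketch of per-driver calibration: learn this driver's normal eye openness
# (EAR), then alert on deviation from *their* baseline, not a global constant.
import statistics

class PersonalizedDrowsinessMonitor:
    def __init__(self, calibration_frames=900, k=2.5):  # ~30 s at 30 fps
        self.calibration_frames = calibration_frames
        self.k = k            # standard deviations below baseline that trigger an alert
        self.samples = []
        self.threshold = None

    def update(self, ear):
        """Call once per frame with the current EAR; returns True to alert."""
        if self.threshold is None:
            self.samples.append(ear)
            if len(self.samples) >= self.calibration_frames:
                mean = statistics.fmean(self.samples)
                std = statistics.pstdev(self.samples)
                # A habitually narrow-eyed or frequent blinker gets a lower
                # threshold, so natural behavior does not cause false alerts.
                self.threshold = mean - self.k * std
            return False  # still calibrating; never alert during this window
        return ear < self.threshold
```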
While the car can learn to avoid sending false alerts for drowsiness, it is equally important for the car to learn the difference between driver inattentiveness and actual drowsiness. Head-pose tracking and gaze detection will help determine if a driver is focused on the road or is distracted and will provide different alerts for a driver who is distracted (such as looking at their phone) and one who is falling asleep.
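A simple way to separate the two cases, sketched below with illustrative thresholds, is to combine the eye signal with head pose and gaze: sustained eye closure reads as drowsiness, while open eyes with the head or gaze turned away from the road reads as distraction, and each can trigger its own style of alert.

```python
# Sketch of a rule-based distinction between drowsiness and distraction,
# combining eye closure with head pose and gaze; thresholds are illustrative.
def classify_driver_state(eyes_closed_seconds, head_yaw_degrees, gaze_on_road):
    if eyes_closed_seconds > 1.5:
        return "drowsy"      # sustained eye closure: sound the wake-up alarm
    if abs(head_yaw_degrees) > 30 or not gaze_on_road:
        return "distracted"  # eyes open but attention off the road: gentler alert
    return "attentive"
```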

3. Driver interaction

Today’s complicated infotainment systems are a distraction in and of themselves. Touchscreen displays look sleeker and reduce dashboard clutter, but they pull the driver’s attention from the road, because drivers must locate and tap the touchscreen to navigate complex menu-driven commands. According to our research, five seconds is the average time your eyes are off the road while texting. At 55 mph, that amounts to covering the length of a football field blindfolded. Drivers need more natural and less distracting ways to interact with the infotainment system, like touch-free gesture control.
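A quick back-of-the-envelope check of that football-field comparison:

```python
# Back-of-the-envelope check of the football-field comparison.
mph = 55
eyes_off_road_s = 5
feet_per_second = mph * 5280 / 3600              # ~80.7 ft/s
distance_ft = feet_per_second * eyes_off_road_s  # ~403 ft
# An American football field is 360 ft including end zones, so five seconds
# at highway speed covers more than a full field.
```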
Simple gestures that are tightly coupled with the functions they control (such as raising your pointer finger to your lips to signal mute or swiping your hand to the right to answer an incoming call, left to decline) minimize cognitive load, alleviating the friction of in-car systems and minimizing driver distraction. BMW has already implemented gesture control in its 7 Series to improve how drivers interact with the infotainment applications and features.
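In software terms, the appeal of such gestures is that each one maps directly onto a single infotainment action. A minimal sketch of that dispatch layer is below; the gesture labels are whatever a (hypothetical) recognizer emits, and the handlers are stubs.

```python
# Sketch of a gesture-to-action dispatch table; gesture labels and handlers
# are hypothetical stand-ins for a real recognizer and infotainment API.
def mute_audio():
    print("audio muted")

def answer_call():
    print("call answered")

def decline_call():
    print("call declined")

GESTURE_ACTIONS = {
    "finger_to_lips": mute_audio,   # tightly coupled: the gesture *means* quiet
    "swipe_right":    answer_call,
    "swipe_left":     decline_call,
}

def on_gesture(label):
    """Called by the gesture recognizer with a classified gesture label."""
    action = GESTURE_ACTIONS.get(label)
    if action:
        action()  # unknown gestures are ignored rather than guessed at
```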
While technology continues to improve the driving experience, it is equally responsible for the rise in distracted driving; case in point, everyone reading this article has likely been guilty of texting and driving at some point. Now it’s up to new technologies, like in-car sensing, computer vision, and AI, to combat distracted driving and ensure our safety until fully autonomous vehicles hit the market.
Gideon Shmuel is the CEO of EyeSight Technologies and an expert in computer vision and gesture recognition technologies.