As automotive safety continues to evolve, the industry faces increasing pressure to deliver systems that are both sophisticated and adaptable, meeting stringent safety standards around the world. Smart Eye’s partnership with OnSemi brings a new dimension to vehicle safety through depth sensing technology, paving the way for a future where vehicles can respond to their occupants in unprecedented ways. In this article, we take a closer look at the depth sensing technology used in Smart Eye’s Driver Monitoring System (DMS) and Interior Sensing AI, and how it helps vehicles meet safety standards like FMVSS 208 and the upcoming Euro NCAP 2026 requirements.
Traditional 2D imaging, which is commonly used in driver monitoring, provides a flat visual of the vehicle’s interior. While useful, this format falls short in capturing essential spatial data like occupant distance, size, or exact positioning within the cabin. This is where depth sensing comes in. By creating a 3D “point cloud” of the interior, depth sensing provides a more comprehensive understanding of cabin dynamics, capturing vital information about the location and movement of each passenger, object, and surface in the vehicle.
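To make the point-cloud idea a bit more concrete, here is a minimal sketch, assuming a generic pinhole camera model, of how a per-pixel depth map can be turned into a 3D point cloud. The resolution and intrinsic parameters are placeholders for illustration, not OnSemi sensor specifications.

```python
import numpy as np

def depth_to_point_cloud(depth_m, fx, fy, cx, cy):
    """Back-project a depth map (in meters) into an N x 3 point cloud.

    depth_m: 2D array of per-pixel depth values
    fx, fy, cx, cy: pinhole camera intrinsics (focal lengths, principal point)
    """
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinate grid
    z = depth_m
    x = (u - cx) * z / fx   # standard pinhole back-projection
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop pixels with no depth return

# Placeholder resolution and intrinsics, for illustration only
cloud = depth_to_point_cloud(np.full((480, 640), 1.2), fx=500, fy=500, cx=320, cy=240)
```

From a cloud like this, an interior sensing system can reason about where occupants sit and how far they are from cabin surfaces, rather than working from a flat image alone.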
OnSemi’s depth sensing technology, which includes indirect Time-of-Flight (iToF) sensors, captures both depth and intensity data in real time. Bahman Hadji, Director of Product Management at OnSemi, explains in our recent Smart Eye podcast episode that this technology not only improves spatial awareness within the cabin but also provides high precision under challenging lighting conditions, such as intense sunlight or shadows. By partnering with OnSemi, Smart Eye brings this advanced depth sensing capability to life in vehicles, setting a new standard for accuracy and reliability in monitoring and responding to in-cabin events.
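The podcast doesn’t go into the underlying math, but the general iToF principle is well established: the sensor emits amplitude-modulated light and recovers distance from the phase shift between the emitted and reflected signals. The sketch below illustrates that relationship; the modulation frequency is an arbitrary example, not an OnSemi specification.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def itof_depth(phase_shift_rad: float, mod_freq_hz: float) -> float:
    """Depth in meters from the measured phase shift of modulated light."""
    # Light covers the distance twice (out and back), hence the factor of 2:
    # d = c * delta_phi / (4 * pi * f_mod)
    return C * phase_shift_rad / (4 * math.pi * mod_freq_hz)

# Example: a 90-degree phase shift at 100 MHz modulation is roughly 0.37 m
print(itof_depth(math.pi / 2, 100e6))
```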
As safety standards evolve, so too must the technology that enables vehicles to protect their occupants. Depth sensing plays a critical role in adaptive restraint systems, which adjust seat belt tension and airbag deployment based on real-time occupant data. This level of responsiveness is essential to meeting FMVSS 208, the U.S. Federal Motor Vehicle Safety Standard for occupant crash protection, and the Euro NCAP protocols, which are rapidly setting a global benchmark for vehicle safety.
Smart Eye’s partnership with OnSemi enhances adaptive restraint systems by integrating depth sensing into our DMS and interior sensing solutions. By capturing 3D data on occupant positioning, our systems can determine the optimal airbag deployment and seat belt adjustments needed for different seating arrangements, improving safety outcomes in the event of a collision. In his interview, Bahman highlighted the importance of knowing an occupant’s exact distance from key areas like the steering wheel or dashboard, which allows for precise restraint adjustments that could ultimately save lives.
One of the most significant advances enabled by depth sensing is improved accuracy in driver and occupant monitoring. Smart Eye’s systems, integrated with OnSemi’s depth sensors, can identify where each occupant is in the cabin and how close they are to specific parts of the interior. This data becomes particularly critical in real-world driving scenarios, where adaptive safety features need to operate seamlessly and effectively.
For instance, depth sensing can detect if a driver or passenger is sitting unusually close to the airbag. In such cases, the system can automatically adjust the airbag’s deployment strength to minimize injury risk. This level of accuracy is essential in meeting Euro NCAP’s upcoming 2026 protocol updates, which will require advanced occupant detection for a higher safety rating. With depth sensing, Smart Eye’s systems not only improve vehicle safety but also give automotive manufacturers the tools needed to achieve high scores under these new protocols, making vehicles safer and more appealing to consumers.
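Neither Smart Eye nor OnSemi disclose their decision logic in this article, so purely as a hypothetical illustration, the sketch below shows how a measured occupant-to-airbag distance could map to deployment modes. The threshold values and mode names are invented for this example; real calibrations come from crash testing and the applicable FMVSS 208 and Euro NCAP requirements.

```python
from enum import Enum

class AirbagMode(Enum):
    SUPPRESS = "suppress"
    LOW_RISK = "low_risk_deployment"
    FULL = "full_deployment"

def select_airbag_mode(chest_to_airbag_m: float) -> AirbagMode:
    """Pick a deployment mode from the occupant's measured distance to the airbag module.

    Hypothetical thresholds for illustration only.
    """
    if chest_to_airbag_m < 0.20:   # occupant very close: suppress to avoid injury
        return AirbagMode.SUPPRESS
    if chest_to_airbag_m < 0.40:   # closer than nominal: reduce deployment energy
        return AirbagMode.LOW_RISK
    return AirbagMode.FULL         # nominal seating position

print(select_airbag_mode(0.33))  # -> AirbagMode.LOW_RISK
```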
Depth sensing technology from OnSemi also addresses a critical limitation of traditional in-cabin monitoring systems: visibility under varying lighting conditions. With iToF sensors, our systems can operate effectively even when faced with harsh sunlight, shadows, or low-light environments. By generating a depth map and an intensity map simultaneously, Smart Eye’s depth sensing technology can “see” through challenging lighting, ensuring that our monitoring systems perform reliably across these conditions.
As Bahman explained in our podcast, iToF allows the sensors to actively filter out unwanted light and create a clear intensity image that captures essential details, such as whether a seat belt is fastened or an occupant is positioned safely. This capability is not only a major improvement over 2D imaging but also a key factor in ensuring vehicles meet global safety standards without compromising the comfort and convenience of the user experience.
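Bahman doesn’t walk through the sensor pipeline in detail, but ambient-light rejection in indirect ToF is commonly explained through the standard four-phase demodulation scheme, sketched below under that assumption: constant background light ends up in the offset term rather than in the phase used for depth.

```python
import math

def four_phase_demodulation(a0, a90, a180, a270):
    """Recover phase, amplitude, and offset from four correlation samples.

    In the classic 4-phase iToF scheme, constant ambient light adds equally to
    every sample, so it cancels in the differences used for the phase estimate.
    """
    phase = math.atan2(a270 - a90, a0 - a180) % (2 * math.pi)
    amplitude = 0.5 * math.hypot(a270 - a90, a0 - a180)   # active-light signal strength
    offset = (a0 + a90 + a180 + a270) / 4.0               # includes ambient light
    return phase, amplitude, offset

# Example: the same ambient level added to all four samples leaves the phase unchanged
print(four_phase_demodulation(10 + 5, 10 + 2, 10 + 1, 10 + 4))
```

The offset term still carries a usable intensity image, which is how details like a fastened seat belt remain visible alongside the depth data.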
This transformation in cabin monitoring is just the beginning. In upcoming articles in our blog series, we’ll explore how depth sensing goes beyond safety to enable intuitive in-cabin interactions, such as gesture-based controls and driver authentication. Imagine a vehicle that can recognize you as the driver, set your seat and mirror preferences automatically, and allow you to interact with infotainment systems through simple hand gestures. Depth sensing is making these features a reality, providing a safer, more personalized, and user-friendly experience for every occupant.
For those interested in a deeper dive into how this transformative technology works, be sure to check out the full podcast episode with Bahman Hadji from OnSemi, where we discuss the capabilities and impact of depth sensing on automotive safety and in-cabin experiences. As we move toward a future of smarter, safer vehicles, depth sensing stands out as a game-changing technology, setting the stage for both innovation and compliance in a rapidly changing industry.
Get in touch with our team to learn more about our solutions and request a live demonstration. In the meantime, download this eBook to uncover how eye tracking is transforming automotive research, driving human-centric mobility, and shaping the future of transportation.