In recent years, we have seen rapid advances in technology and AI, allowing for unprecedented insights into human behavior and powering human-to-machine interactions in entirely new ways.
Sensor technologies have become more accessible and are easier and more cost effective to deploy. Combined with advanced compute processing power, new machine learning methodologies and access to massive amounts of data, we are entering a new era of intelligent systems – systems that assess complex human behaviors and states, and enable us to lead safer, healthier, happier and more connected lives.
We call this Human Insight AI: technology that understands, supports and predicts human behavior in complex environments.
Applying novel deep learning methodologies and massive amounts of data, our Human Insight AI draws on multiple data sources in an unobtrusive, non-invasive manner. This provides real-time analysis with a high level of computational accuracy, and it is designed to be reliable, scalable, robust and able to run on any platform.
Our multimodal approach is unique: we are the first to combine different sensor technologies to discover more than what meets the eye.
Through multiple sensors, we measure many different aspects of human behavior: from our reactions and interactions to the objects we use and the activities we engage in. These insights can then be deployed in any area that benefits from a greater understanding of human behavior. Both our leading Driver Monitoring and Interior Sensing software and our solutions for Behavioral Research are built around many of these capabilities, enabling them to offer unparalleled insight into human behavior.
Our Human Insight AI consists of a number of different core technologies:
Eye tracking is the art of capturing and measuring a person’s gaze and eye movements. By using sensors that detect the human eye in a given environment, we are able to collect and analyze the data to assess a person’s alertness, attentiveness and focus in order to create a clear picture of overall mood and awareness level.
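As a simplified illustration of the idea (not Smart Eye's actual algorithm), a coarse horizontal gaze direction can be derived from the pupil's position relative to the eye corners; the landmark detector that supplies these image coordinates is assumed here.

```python
# Illustrative sketch: classifying coarse horizontal gaze from eye landmarks.
# The eye-corner and pupil positions are assumed to come from a detector.

def gaze_direction(eye_left_corner, eye_right_corner, pupil):
    """Return "left", "center" or "right" from the normalized pupil offset."""
    eye_width = eye_right_corner[0] - eye_left_corner[0]
    # Normalized pupil position: 0.0 at the left corner, 1.0 at the right.
    ratio = (pupil[0] - eye_left_corner[0]) / eye_width
    if ratio < 0.40:
        return "left"
    if ratio > 0.60:
        return "right"
    return "center"

# A pupil near the midpoint between the corners reads as "center".
print(gaze_direction((100, 50), (140, 50), (121, 52)))  # center
```

Tracked over time, even a coarse signal like this can feed higher-level measures such as attentiveness and focus.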
Smart Eye company Affectiva is pioneering Emotion AI – software that detects nuanced human emotions, reactions, interactions and complex cognitive states. Using sophisticated computer vision and machine learning techniques, our algorithms scientifically measure and analyze the emotions humans are innately programmed to express through our facial movements.
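One established way to describe facial movements is the Facial Action Coding System, which decomposes expressions into action units. The sketch below illustrates the mapping step only, with a deliberately simplified action-unit-to-expression table; the action-unit detector itself is assumed.

```python
# Illustrative sketch: mapping detected facial action units (AUs) to a coarse
# expression label. The AU combinations here are simplified examples.

EXPRESSIONS = {
    "smile":       {"AU6", "AU12"},  # cheek raiser + lip-corner puller
    "brow_furrow": {"AU4"},          # brow lowerer
}

def classify_expression(active_aus):
    """Return the names of expressions whose action units are all active."""
    return [name for name, aus in EXPRESSIONS.items() if aus <= set(active_aus)]

print(classify_expression(["AU6", "AU12", "AU25"]))  # ['smile']
```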
How a person moves can tell us a lot about their intentions and actions in any situation. With the insight gained from Smart Eye’s algorithms for activity and body pose tracking, we are able to get a more comprehensive understanding of human behavior. In interior vehicle environments, identifying people’s activities and body positions can offer valuable information about their safety and well-being.
Our object detection algorithms give us the ability to connect how people act to the objects they surround themselves with. By including object detection in the broader analysis of human behavior, we are able to not only increase our understanding of how humans act, but why they act the way they do. In vehicles, object detection can even save lives by identifying children and pets left behind in cars.
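The left-behind use case can be sketched as a simple rule over detector output. This is an illustrative example only: the detector, its labels and the confidence threshold are assumptions, not Smart Eye's actual system.

```python
# Illustrative sketch: flagging occupants left behind once a vehicle is locked,
# based on (label, confidence) pairs from an assumed in-cabin object detector.

def left_behind_alert(detections, vehicle_locked):
    """Return the detected labels that warrant an alert when the car is locked."""
    if not vehicle_locked:
        return []
    watch_list = {"child", "pet"}
    return [label for label, conf in detections
            if label in watch_list and conf >= 0.8]

detections = [("adult", 0.95), ("child", 0.91), ("bag", 0.70)]
print(left_behind_alert(detections, vehicle_locked=True))  # ['child']
```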
The foundation of our Human Insight AI and all our core technologies is imagery captured by cameras or other optical sensors, processed by our algorithms. To train and validate these algorithms, we use machine learning and computer vision methods.
Computer vision is the use of computer algorithms to understand the contents of an image or a video. Our technology, like many computer vision approaches, relies on machine learning.
At the core of machine learning are two primary components: (1) data: like any learning task, machines learn through examples, and can learn better when they have access to massive amounts of data, and, (2) algorithms: how machines extract, condense and use information learned from examples.
While the algorithms are the students, the data are the teaching materials from which they learn. At Smart Eye, algorithms and data are combined into a system that is capable of finding and tracking faces and body points, and performing highly accurate, scalable and repeatable analysis of human behavior.
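The two components above can be made concrete with a deliberately tiny sketch: a nearest-centroid classifier, where `train` condenses labeled example data into per-class averages and `predict` applies what was learned to new inputs. The features and labels are made up for illustration.

```python
# Illustrative sketch of "data + algorithm": learning per-class centroids from
# examples, then classifying new inputs by the nearest centroid.

def train(examples):
    """Compute one centroid per class from (features, label) examples."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [x / counts[label] for x in acc]
            for label, acc in sums.items()}

def predict(centroids, features):
    """Assign the class whose centroid is closest in squared distance."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(features, c))
    return min(centroids, key=lambda label: dist(centroids[label]))

# Hypothetical eye-openness features: more (and more varied) examples
# generally yield better centroids.
data = [([0.1, 0.2], "closed"), ([0.2, 0.1], "closed"),
        ([0.9, 0.8], "open"),   ([0.8, 0.9], "open")]
model = train(data)
print(predict(model, [0.85, 0.85]))  # open
```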
Smart Eye company iMotions has developed the world’s most comprehensive biometric data fusion platform. Combining data streams from different sensors, the iMotions platform integrates and synchronizes multiple sensing technologies – powering holistic human insight.
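At its core, synchronizing sensor streams means aligning samples recorded at different rates onto a common timeline. The sketch below shows one simple way to do this (nearest-in-time matching); it is an illustration of the general idea, not the iMotions implementation, and the stream names are invented.

```python
# Illustrative sketch: aligning two timestamped sensor streams by pairing each
# sample of one stream with the nearest-in-time sample of the other.

from bisect import bisect_left

def synchronize(stream_a, stream_b):
    """Pair each (t, value) in stream_a with the nearest (t, value) in stream_b.

    Both streams must be sorted by timestamp t.
    """
    times_b = [t for t, _ in stream_b]
    fused = []
    for t, value_a in stream_a:
        i = bisect_left(times_b, t)
        # Consider the neighbouring samples on both sides and keep the closer.
        candidates = [j for j in (i - 1, i) if 0 <= j < len(stream_b)]
        j = min(candidates, key=lambda j: abs(times_b[j] - t))
        fused.append((t, value_a, stream_b[j][1]))
    return fused

gaze = [(0.00, "road"), (0.10, "mirror")]     # e.g. gaze-target samples
heart = [(0.02, 72), (0.08, 74), (0.14, 73)]  # e.g. heart-rate samples
print(synchronize(gaze, heart))  # [(0.0, 'road', 72), (0.1, 'mirror', 74)]
```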
Through our strong expertise in data collection and annotation, we continuously improve our algorithms’ ability to understand human behavior in complex environments. When training and validating our deep learning algorithms, we expose them to massive amounts of data.
Our data repository is one of the largest of its kind, including over 12 million face videos and 5.8 billion facial frames from 90 different countries.
With more than 28,000 hours of automotive data, we are able to adapt our algorithms to varying camera angles, lighting and other environmental conditions in a vehicle. For every new face we capture, this dataset grows and becomes an increasingly robust foundation for all our deep learning algorithms.
To augment our data sets, Smart Eye uses data synthesis. Our proprietary data synthesis tool not only lets us simulate images, but also label them perfectly without the need for human annotation. By developing our own advanced synthetic data generation capabilities, we can create our own data sets and augment our existing data – enabling us to train our algorithms better and faster than anyone in the industry.
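The key property of synthesis, that labels need no human annotation, follows from rendering each image from known parameters. The toy sketch below illustrates that property only; the image format and "pupil" feature are made up and have nothing to do with Smart Eye's proprietary tool.

```python
# Illustrative sketch: a synthesized training sample is labeled perfectly by
# construction, because the image is rendered from the label itself.

import random

def synthesize_sample(size=8, rng=random):
    """Render a tiny grayscale image with one bright 'pupil' pixel.

    Returns (image, label), where label is the (x, y) position used for
    rendering, so the annotation is exact and requires no human labeler.
    """
    x = rng.randrange(size)
    y = rng.randrange(size)
    image = [[0] * size for _ in range(size)]
    image[y][x] = 255  # the rendered feature
    return image, (x, y)

image, (x, y) = synthesize_sample(rng=random.Random(42))
print(image[y][x])  # 255 — the label points exactly at the rendered feature
```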
Want to know more about our Smart Eye technology solutions?
Send us your request and we’ll be happy to set up a live demo!
Smart Eye is an ISO 9001-certified company, and we are committed to quality. Our global organization is driven by our quality policy: