Sensor Trends

Vision & Sensors


Taking a Cue From Biology

We can replicate the efficiency of how our eyes work with a new form of vision capture called event-based vision.
By Luca Verre

The use of machine vision in industrial automation applications continues to increase as companies look for gains in productivity, efficiency and safety. Market forecasters estimate that the total machine vision market will exceed $18 billion by 2025, up from about $10 billion today. High-performance cameras and sensors, combined with advanced computing techniques, are enabling machines to dramatically improve industrial tasks such as manufacturing and quality verification.

And machine vision becomes even more important in the Covid era, as we look for ways to further automate functions and reduce human interaction.

While today’s machine vision systems can help automate a wide range of processes, including many specifically related to quality (e.g., inspection, counting, predictive maintenance), most systems rely on a form of image capture that was invented more than a hundred years ago. In the 1880s, Eadweard Muybridge accidentally created animated image capture, which led to cinema, while exploring animal motion. His technique replicated images in a repeating sequence of frames to create a sense of movement, and frame-based imaging was born.

Frame-based techniques have endured for years, with improvements in performance and accuracy that made them suitable for many industrial use cases. But the approach was really developed to create images that please the human eye, not for efficient use by machines. In frame-based video, an entire image (i.e., the light intensity at each pixel) is recorded at a pre-assigned interval, known as the frame rate. While this works well for representing the ‘real world’ on a screen, recording the entire image at every time increment oversamples all the parts of the image that have not changed. In other words, because these traditional camera techniques capture everything going on in a scene all the time, they create far too much unnecessary data, probably 90% or more in many applications. This taxes computing resources and complicates data analysis.
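The scale of that oversampling can be sketched with some rough arithmetic. The resolution, frame rate, and 10% activity figure below are illustrative assumptions, not measurements from any particular system:

```python
# Back-of-the-envelope comparison of frame-based data volume versus a
# hypothetical change-only stream. All numbers here are illustrative.
width, height = 1280, 720          # example industrial camera resolution
fps = 120                          # a "high-performance" frame rate
bytes_per_pixel = 1                # 8-bit grayscale

frame_based = width * height * bytes_per_pixel * fps   # bytes per second
active_fraction = 0.10             # assume only ~10% of pixels change
change_only = frame_based * active_fraction

print(f"frame-based:  {frame_based / 1e6:.1f} MB/s")
print(f"change-only:  {change_only / 1e6:.1f} MB/s")
print(f"redundant:    {1 - active_fraction:.0%}")
```

Even at modest resolution, a frame-based stream carries over 100 MB/s, of which the vast majority describes pixels that did not change.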

Focusing only on what changes

What’s really important in industrial use cases of machine vision is only what changes in a scene. We call that “events.” By taking a cue from human biology, we can replicate the efficiency of how our eyes work with a new form of vision capture called event-based vision.

Event-based vision is based on an approach to computing known as neuromorphic computing, which uses the brain as a model for embedding more intelligence and efficiency in how machines process information. Applied to machine vision, it can significantly improve efficiency, both in the amount of data collected and in the power required.

If you think about how the brain and eye work, they must deal with a massive amount of visual data in every waking moment. Evolution’s frugal nature led to the emergence of shortcuts to cope with this data deluge. For example, the photoreceptors in our eyes only report back to the brain when they detect change in some feature of the visual scene, such as its contrast or luminance. It is far more important to be able to concentrate on movement within a scene than to take repeated, indiscriminate inventories of its every detail. In comparison to a frame-based camera, where 120 frames per second would be considered high performance, the human eye samples changes at up to 1,000 times per second but does not record the primarily static background at such high frequencies.

Event-based vision can be integrated as part of a modern machine vision system and customized for specific uses and applications.

How event-based vision works

Event-based vision applies that same approach to machines, changing the frame-based paradigm altogether. An event-based sensor facilitates machine vision by recording changes in the scene rather than recording the entire scene at regular intervals. Instead of integrating the photo-generated charge at every pixel after a pre-defined interval (i.e., synchronously), each pixel in an event-based sensor works independently (i.e., asynchronously). By embedding intelligence beyond the sensor, at the very edge of each pixel, this approach lets factory managers extract meaningful insights from a scene at extreme speed and with high data efficiency, while keeping overall solution costs down.

So, unlike frame-based vision, an event-based sensor’s array of pixels doesn’t have a common frame rate. In fact, there are no frames at all. Instead, each pixel is triggered once the light falling upon it has changed by a threshold set by the user. If the incident light isn’t changing (for example, in the background of a camera view of a packaging line) then the pixel stays silent. If the scene is changing (for example, an object passes through it), the affected pixels report the change. If many objects pass, all the affected pixels report a sequence of changes.

With this approach, movement is captured as a continuous stream of information, and nothing is lost the way it would be between the frames of a conventional camera.
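The per-pixel triggering described above can be sketched in a few lines. This is a toy model of the principle, not any vendor's actual sensor design; the log-intensity threshold and the frame-sequence input are simplifying assumptions:

```python
# Toy model of asynchronous, per-pixel event generation. Each pixel fires an
# event only when the log intensity change since its last event exceeds a
# user-set threshold; unchanged pixels stay silent.
import math

def events_from_frames(frames, threshold=0.2):
    """Yield (t, x, y, polarity) events from a sequence of 2D intensity grids."""
    last = [[math.log(v + 1e-6) for v in row] for row in frames[0]]
    events = []
    for t, frame in enumerate(frames[1:], start=1):
        for y, row in enumerate(frame):
            for x, v in enumerate(row):
                delta = math.log(v + 1e-6) - last[y][x]
                if abs(delta) >= threshold:          # this pixel fires independently
                    events.append((t, x, y, 1 if delta > 0 else -1))
                    last[y][x] = math.log(v + 1e-6)  # reference resets per pixel
    return events

# A static background produces no events; only the changing pixel reports.
frames = [
    [[0.5, 0.5], [0.5, 0.5]],
    [[0.5, 0.5], [0.5, 0.9]],   # bottom-right pixel brightens
    [[0.5, 0.5], [0.5, 0.9]],   # scene static again, so silence
]
print(events_from_frames(frames))  # [(1, 1, 1, 1)]
```

The three static pixels never appear in the output; only the single brightening pixel reports, and only at the moment it changes.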

Event-based vision offers many advantages, including a significant reduction in the amount of data processed (up to 1000x less), allowing it to be implemented in systems with much less computational power than a high-end frame-based machine vision system. It also requires less power to operate, making it ideal for remote or mobile systems. And event-based vision operates with consistent effectiveness regardless of the lighting conditions, often an issue in industrial environments where light quality can vary.

Using event-based vision methods enables sensors to be built with much higher dynamic ranges than is usually the case, because each pixel automatically adapts to the incident light. For this reason, event-based sensors aren’t blinded by high contrast, such as a car’s headlights at night, in the way that a conventional sensor would be. And, event-based sensing allows relatively cost-efficient sensors to record events that would otherwise require conventional cameras running at up to tens of thousands of frames per second.

This approach to vision sensing can improve overall factory throughput by bringing ultra high-speed, real-time and affordable vision to manufacturing. It enables more effective quality control and maintenance to ensure efficient operations. This includes manufacturing process control by analyzing equipment process deviations through kinetic or vibration monitoring.

A unique way of addressing predictive maintenance issues in Industry 4.0

Event-based vision sensors are highly applicable to many industrial applications that require machine vision, especially those that involve highly dynamic images, including surveillance, tracking, counting, equipment monitoring, robotics and more. Here are just a few specific use cases.

  • High-speed counting
    The approach allows for unprecedented counting speed that can reach a thousand counts per second. A traditional frame-based approach will capture the whole scene at a fixed, predefined frame rate, without taking the scene dynamics into account. The overall system load is unnecessarily high and limits maximal speed potential. Event-based vision allows for the object’s motion to define the camera rate dynamically, in real-time. Event-based algorithms can track objects smoothly and extract geometry even at very high speeds. In addition, vision processes like counting and tracking can be realized on modest computing systems, including at the edge.
  • Vibration monitoring/predictive maintenance
    Event-based vision systems can improve predictive maintenance by measuring and monitoring equipment vibrations from 1 Hz to 10 kHz remotely, continuously, and in real time under a wide variety of challenging lighting conditions. The state, integrity and robustness of a piece of mechanical equipment can be inferred from its vibration frequencies. Variation in the vibration of production equipment is a primary indicator of deviation from its normal production set point. This information allows maintenance teams to observe and understand any process deviation long before machines malfunction or break down. Event-based vision technology helps characterize the vibration pattern of equipment at different points and monitor deviations from standard operation.
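The vibration-monitoring idea above can be illustrated with a short sketch. The event stream here is synthetic, and reducing frequency estimation to the median inter-event interval is a deliberate simplification of how a real system would analyze per-pixel event timing:

```python
# Hedged sketch of vibration-frequency recovery from per-pixel event timing.
# A real event sensor emits (t, x, y, polarity) tuples with fine-grained
# timestamps; here we use a synthetic list of event times at one pixel.

def vibration_frequency(timestamps):
    """Estimate the oscillation frequency seen at one pixel.

    For a steady vibration, consecutive same-polarity events at a pixel are
    one period apart, so the median inter-event interval gives the frequency.
    """
    intervals = sorted(b - a for a, b in zip(timestamps, timestamps[1:]))
    median = intervals[len(intervals) // 2]
    return 1.0 / median

# Synthetic stream: one positive-polarity event per cycle of a 50 Hz vibration.
ts = [k / 50.0 for k in range(100)]
print(f"estimated frequency: {vibration_frequency(ts):.1f} Hz")
```

A drift in the estimated frequency away from the machine's normal set point would be the deviation signal a maintenance team watches for.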


Changing how machines see and the value we get from them

Event-based vision provides a new level of efficiency in how machines can discover important information from almost any process. Indeed, in many instances it can reveal to machines what was previously invisible to them. It has truly breakthrough potential to bring more value to critical tasks in manufacturing, packaging, logistics, surveillance and more. If looking at a conventional video is like being handed a sequence of postcards by a friend and being asked to work out what is changing by flicking through them, an event-driven sensor’s output is more like looking at a single postcard while that friend uses a highlighter to mark important changes in the scene as they happen—no matter the lighting conditions in the scene.

Opening Image Sources:
Selman Keles / iStock / Getty Images Plus via Getty Images.
Morsa Images / E+ via Getty Images.

Luca Verre is CEO and co-founder of Prophesee, SA.


March 2021 | Volume 60 | Number 3
