Accidents can happen in the blink of an eye, as any driver knows, so when it comes to the camera system in autonomous vehicles, processing time is critical. The time it takes the system to capture an image and send the data to the microprocessor for image processing could mean the difference between avoiding an obstacle and being involved in a major accident.
In-sensor image processing, in which important features are extracted from raw data by the image sensor itself rather than by a separate microprocessor, can speed up visual processing. To date, however, demonstrations of in-sensor processing have been limited to emerging research materials that are difficult to incorporate into commercial systems.
Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) researchers have developed the first in-sensor processor that could be integrated into commercial silicon imaging sensor chips, known as complementary metal-oxide-semiconductor (CMOS) image sensors, which are used in nearly all commercial devices that need to capture visual information, including smartphones.
The research is published in Nature Electronics.
“Our work harnesses the mainstream semiconductor electronics industry to rapidly bring in-sensor computing to a wide variety of real-world applications,” said Donhee Ham, the Gordon McKay Professor of Electrical Engineering and Applied Physics at SEAS and senior author of the paper.
Ham and his team developed a silicon photodiode array. Commercially available image-sensing chips also contain a silicon photodiode array to capture images, but the team’s photodiodes are electrostatically doped, meaning that the sensitivity of individual photodiodes, or pixels, to incoming light can be tuned by voltages. An array that connects multiple voltage-tunable photodiodes together can perform an analog version of the multiplication and addition operations central to many image-processing pipelines, extracting the relevant visual information as soon as the image is captured.
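The idea behind the analog multiply-and-add can be sketched in software. In this simplified model (an illustration, not the paper's actual device physics), each voltage-tuned pixel's responsivity acts as a weight on its photocurrent, and the photocurrents sum naturally on a shared output line; the kernel values below are hypothetical:

```python
import numpy as np

def in_sensor_mac(light_intensities, responsivities):
    """Analog multiply-accumulate: each photodiode's photocurrent is the
    incident light scaled by its voltage-tuned responsivity (the weight),
    and the per-pixel currents add together on a shared output wire."""
    photocurrents = light_intensities * responsivities  # per-pixel multiply
    return photocurrents.sum()                          # currents sum on the wire

# A 3x3 patch of incident light and a kernel of tuned responsivities
patch = np.array([[0.2, 0.5, 0.2],
                  [0.5, 1.0, 0.5],
                  [0.2, 0.5, 0.2]])
kernel = np.array([[ 0, -1,  0],
                   [-1,  4, -1],
                   [ 0, -1,  0]])  # a Laplacian-style edge-enhancing kernel

print(in_sensor_mac(patch, kernel))  # one weighted sum, read out as a current
```

The key point is that the multiplication and summation happen as part of sensing itself, so the weighted result is available the moment the light is captured, with no separate digital readout-then-compute step.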
“These dynamic photodiodes can concurrently filter images as they are captured, allowing the first stage of vision processing to be moved from the microprocessor to the sensor itself,” explained Houk Jang, the paper’s first author and a postdoctoral fellow at SEAS.
Depending on the application, the silicon photodiode array can be programmed to act as different image filters that remove unnecessary detail or noise. A high-pass filter, for example, may be required by an imaging system in an autonomous vehicle to track lane markings, whereas other applications may require a blurring filter to reduce noise.
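Reprogramming the filter amounts to loading a different set of kernel weights. The sketch below (a software analogy, with hypothetical kernel values) slides each kernel over an image the way tuned pixel neighborhoods would apply it in-sensor, contrasting a blur with a high-pass edge detector:

```python
import numpy as np

def filter_image(image, kernel):
    """Apply a kernel across the image, as if each pixel neighborhood's
    voltage-tuned responsivities implemented the kernel in-sensor."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # weighted sum over one neighborhood = one analog readout
            out[i, j] = (image[i:i + kh, j:j + kw] * kernel).sum()
    return out

# Two hypothetical filter programs: a box blur to suppress noise,
# and a high-pass kernel that keeps only edges (e.g. lane markings)
blur = np.full((3, 3), 1 / 9)
high_pass = np.array([[-1, -1, -1],
                      [-1,  8, -1],
                      [-1, -1, -1]])

image = np.zeros((6, 6))
image[:, 3:] = 1.0  # a simple vertical edge

print(filter_image(image, blur))       # edge is smoothed into a gradient
print(filter_image(image, high_pass))  # only the edge location is nonzero
```

Swapping kernels here corresponds, in the device, to applying a different pattern of bias voltages to the same photodiode array, with no hardware change.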
“Looking ahead, we foresee the use of this silicon-based in-sensor processor not only in machine vision applications, but also in bio-inspired applications, wherein early information processing allows for the co-location of sensor and compute units, like in the brain,” said Henry Hinton, a graduate student at SEAS and co-first author of the paper.
The team’s next goal is to increase photodiode density and integrate them with silicon integrated circuits.
“By replacing the standard non-programmable pixels in commercial silicon image sensors with the programmable pixels developed here, imaging devices can intelligently trim out unnecessary data, making them more efficient in both energy and bandwidth to address the demands of the next generation of sensory applications,” Jang explained.