Let’s face it: humans are the dunces of the animal kingdom. We can’t smell things as dogs or bees can; we can’t hear like bats can; and our primary sense, sight, pales in comparison to that of creatures that can perceive ultraviolet or infrared light. In truth, the only edge we have is our big brains, not our senses. But sometimes, as a species, that’s all you need. We can’t see in the dark, but we can design infrared cameras to help us – and now, University of California, Irvine researchers have devised a means to make those pictures even more realistic.
“Some night vision systems utilise infrared light that is not visible to humans, and the pictures created are transferred to a digital display, displaying a monochrome image in the visible range,” according to the paper describing the technology, published this week in the journal PLOS ONE. “We wanted to create an image algorithm that employed optimal deep learning architectures to predict a visible spectrum depiction of a scene as if it were seen by a person using visible spectrum light,” the paper continues. “This would allow people to digitally generate a visible spectrum picture while they are otherwise in total ‘darkness’ and only exposed to infrared light.”
So, an infrared camera that can reconstruct color images? Not quite – the algorithm the scientists used to reconstruct the photos is more essential than the camera itself. They developed a neural network, a form of deep learning algorithm designed to mimic how human brains learn, and then taught it to find similarities between how pictures appear in the infrared and visible spectrums. According to the paper, “we… developed a convolutional neural network using a U-Net-like architecture [an architecture designed for fast and precise image processing] to predict visible spectrum pictures from solely near-infrared data,” adding that “this research is a first step in predicting human visible spectrum scenes from near-infrared illumination that is unnoticeable.”
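To give a rough sense of what “U-Net-like” means here: the network squeezes the near-infrared image down to a low-resolution summary (the encoder), expands it back up (the decoder), and wires the encoder’s full-resolution features directly into the decoder via a “skip connection” so fine spatial detail survives the bottleneck. The toy sketch below illustrates only that data flow – the pooling, upsampling, and the fixed `w` matrix are stand-ins for the learned convolutions in the authors’ actual network, and all names and sizes are illustrative.

```python
import numpy as np

def avg_pool2(x):
    """Downsample an (H, W, C) array by 2x average pooling (encoder step)."""
    h, w, c = x.shape
    return x.reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))

def upsample2(x):
    """Upsample an (H, W, C) array by 2x nearest-neighbour (decoder step)."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def unet_like(nir_image):
    """Map a 1-channel near-infrared image to a 3-channel 'visible' prediction."""
    skip = nir_image                   # keep full-resolution features aside
    bottleneck = avg_pool2(nir_image)  # encoder: lose resolution, keep context
    decoded = upsample2(bottleneck)    # decoder: restore resolution
    # Skip connection: concatenate encoder features onto the decoder output.
    merged = np.concatenate([decoded, skip], axis=-1)    # shape (H, W, 2)
    # Stand-in for a learned 1x1 convolution projecting features to RGB.
    w = np.array([[0.5, 0.5], [0.3, 0.7], [0.8, 0.2]])   # (3, 2) toy weights
    return merged @ w.T                                   # shape (H, W, 3)

nir = np.random.rand(8, 8, 1)   # fake single-channel NIR frame
rgb = unet_like(nir)
print(rgb.shape)  # (8, 8, 3)
```

In a real U-Net there are several such encoder/decoder levels, each with learned convolutions and a skip connection, which is what lets the model reconstruct sharp, plausibly colored images rather than blurry ones.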
While the reconstructed pictures are stunning, the researchers concede that this is simply a “proof-of-principle study employing printed images with a restricted optical pigment background” – or, to put it another way, it isn’t ready for real-world use just yet. So far, it has only succeeded with faces. “Of course, human faces are a fairly restricted collection of things, if you will. It doesn’t instantly translate to coloring a broad scene,” Professor Adrian Hilton, Director of the Centre for Vision, Speech and Signal Processing (CVSSP) at the University of Surrey, told New Scientist.
“As it stands now, applying the approach trained on faces to another setting is unlikely to work, and unlikely to produce anything reasonable.” An AI trained on bowls of fruit rather than faces, for example, might be confused by a random blue banana, since its training data would have contained only yellow bananas, he added. AI, like so many other things, is only as smart and objective as humans make it.
While the study’s primary author, Andrew Browne, notes that these findings are preliminary, he believes that with more research, the approach might become exceedingly precise. He told New Scientist, “I believe this technique might be utilized for exact color evaluation provided the amount and diversity of data used to train the neural network is high enough to boost accuracy.” Only one question remains: how well will the new AI do against The Dress?