Technology

Deep Science: Keeping AI Honest in Medicine, Climate Science and Vision

Research papers come out far too often for anyone to read them all. That's especially true in machine learning, which now affects (and produces papers in) practically every industry and company. The purpose of this column is to collect some interesting recent discoveries and papers, in but not limited to artificial intelligence, and explain why they matter. This week we have a number of entries aimed at identifying or confirming bias, cheating behaviors, or failures in the data supporting machine learning systems.

But first, a thoroughly interesting project from the University of Washington, which is being presented at the Conference on Computer Vision and Pattern Recognition. The researchers trained a system that recognizes and predicts the flow of water, clouds, smoke and other fluid features in photos, animating them from a single static image. The results are striking.

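To make the general idea concrete, here is a toy sketch, not the UW team's method: given a per-pixel motion field for a still photo, you can synthesize frames by repeatedly advecting pixels along that field. In the real system a network predicts such a field from the image; here the field is simply fabricated for illustration.

```python
# Toy sketch (not the UW method): animating a still image by repeatedly
# advecting pixel sampling locations along a fixed 2D motion field.
# The motion field here is fabricated; a learned model would predict it.
import numpy as np

def advect(image, flow, steps=1):
    """Warp `image` (H x W x C) along `flow` (H x W x 2) for `steps` frames."""
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    frames = []
    for _ in range(steps):
        # Move each pixel's sampling location backwards along the flow
        xs = np.clip(xs - flow[..., 0], 0, w - 1)
        ys = np.clip(ys - flow[..., 1], 0, h - 1)
        frames.append(image[ys.astype(int), xs.astype(int)])
    return frames

# Fabricated inputs: a random "photo" and a uniform rightward motion field.
still = np.random.rand(64, 64, 3).astype(np.float32)
flow = np.zeros((64, 64, 2), dtype=np.float32)
flow[..., 0] = 1.5  # pixels per frame, to the right

animation = advect(still, flow, steps=30)  # 30 synthetic frames
```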

Why, though? Well, for one thing, the future of photography is code, and the better our cameras understand the world they're pointed at, the better they can accommodate or recreate it. Fake river flows aren't in high demand, but accurately predicting the movement and behavior of common photo features is.

An important question to answer when creating and deploying any machine learning system is whether it's actually doing the thing you want it to. The history of "AI" is riddled with examples of models that found a way to look like they're doing something without actually doing it, like a kid kicking everything under the bed when they're supposed to clean their room.

That's a serious problem in medicine, where a system that's faking it could have dire consequences. A study, also from UW, finds that models proposed in the literature have a tendency to do just this, in what the researchers call "shortcut learning." These shortcuts can be simple, for example basing an X-ray's risk assessment on the patient's demographics rather than the image data, or more unique, such as relying heavily on conditions specific to the hospital the data comes from, making it impossible to generalize to others. The team found that many models basically fail when used on datasets that differ from their training ones. They're hopeful that advances in machine learning transparency (opening up the "black box") will make it easier to tell when these systems are skirting the rules.
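A minimal way to see why testing on an external dataset exposes shortcut learning, sketched here with made-up synthetic data rather than anything from the study: a classifier trained where a hospital-specific marker happens to track the label looks accurate in-house, then collapses on data from another hospital where that marker carries no information.

```python
# Hedged illustration of "shortcut learning" with synthetic data (not the
# UW study's setup): a model that leans on a hospital-specific marker looks
# accurate internally but fails on data from a different hospital.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, marker_tracks_label):
    y = rng.integers(0, 2, n)                  # ground-truth labels
    signal = y + rng.normal(0, 2.0, n)         # weak genuine clinical signal
    if marker_tracks_label:
        marker = y + rng.normal(0, 0.1, n)     # marker leaks the label (the shortcut)
    else:
        marker = rng.normal(0, 1.0, n)         # marker is pure noise elsewhere
    return np.column_stack([signal, marker]), y

# Training hospital and an internal test set: the marker tracks the label.
X_train, y_train = make_data(5000, marker_tracks_label=True)
X_int, y_int = make_data(5000, marker_tracks_label=True)
# External hospital: the same marker carries no information.
X_ext, y_ext = make_data(5000, marker_tracks_label=False)

clf = LogisticRegression().fit(X_train, y_train)
print("internal accuracy:", clf.score(X_int, y_int))  # inflated by the shortcut
print("external accuracy:", clf.score(X_ext, y_ext))  # drops toward chance
```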