Controlling Digital Manufacturing with Artificial Intelligence

A new computer-vision system monitors the 3D printing process and adjusts the printer's velocity and path in real time to correct errors. By training the system in simulation, the researchers avoid the costly trial-and-error of setting 3D printing parameters for new materials.

Scientists and engineers are constantly creating new materials with unique properties that can be used for 3D printing, but figuring out how to print with these materials can be a difficult and expensive task. An expert operator must often use manual trial-and-error (possibly thousands of prints) to find parameters that print a new material consistently and effectively. These parameters include printing speed and the amount of material the printer deposits.

MIT researchers have now used artificial intelligence to streamline this procedure. They developed a machine-learning system that uses computer vision to watch the manufacturing process and correct errors in how it handles the material in real time. They used simulations to train a neural network to adjust printing parameters to minimize error, then applied that controller to a real 3D printer. Their system printed objects more precisely than any other 3D printing controller they tested.

The research avoids the prohibitively expensive process of printing thousands or millions of real-world objects in order to train the neural network. It may also make it easier for engineers to incorporate novel materials into their prints, allowing them to create objects with unique electrical or chemical properties. It could also assist technicians in making on-the-fly changes to the printing process if material or environmental conditions change unexpectedly.

Picking parameters

Determining the ideal parameters of a digital manufacturing process can be one of the most expensive parts of the process because so much trial-and-error is required. And once a technician finds a combination that works well, those parameters are only ideal for one specific situation. The technician has little data on how the material will behave in other environments, on different hardware, or if a new batch exhibits different properties.

Using a machine-learning system is fraught with challenges, too. First, the researchers needed to measure what was happening on the printer in real time.

To accomplish this, they created a machine-vision system that utilized two cameras aimed at the 3D printer’s nozzle. The system shines light at the material as it is deposited and calculates the thickness of the material based on how much light passes through.
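
The article doesn't specify how thickness is computed from the transmitted light; a common model for this kind of measurement is the Beer-Lambert law, sketched below as an illustrative assumption. The function name and attenuation coefficient are hypothetical, not the team's actual calibration.

```python
import numpy as np

def thickness_from_transmission(transmitted, incident, attenuation):
    """Estimate per-pixel material thickness from how much light passes
    through, using the Beer-Lambert law: I = I0 * exp(-mu * t), so
    t = -ln(I / I0) / mu."""
    ratio = np.clip(transmitted / incident, 1e-6, 1.0)  # guard against log(0)
    return -np.log(ratio) / attenuation

# If half the light passes through and mu = 0.7 per mm, the estimated
# thickness is ln(2) / 0.7, roughly 0.99 mm.
t = thickness_from_transmission(np.array([0.5]), np.array([1.0]), 0.7)
```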

“Think of the vision system as a set of eyes watching the process in real time,” Foshey says.

The controller would then process the images received from the vision system and, based on any errors detected, adjust the feed rate and direction of the printer. However, training a neural network-based controller to understand this manufacturing process is data-intensive and would necessitate millions of prints. As a result, the researchers created a simulator instead.
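
As a stand-in for the learned controller, here is a minimal proportional feedback step: it compares the measured deposit to the target and nudges the feed rate accordingly. The real system uses a neural network policy, not this hand-written rule; the names and gain value are illustrative.

```python
import numpy as np

def control_step(measured, target, feed_rate, gain=0.1):
    """One proportional feedback step: slow the feed where too much
    material was deposited, speed it up where too little was."""
    error = measured - target                  # + = over-, - = under-extruded
    mean_err = float(np.mean(error))
    new_feed = feed_rate * (1.0 - gain * mean_err)
    return new_feed, error

# Uniform over-extrusion of 0.2 drops the feed rate from 1.0 to 0.98.
feed, err = control_step(np.full((4, 4), 1.2), np.full((4, 4), 1.0), feed_rate=1.0)
```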

Successful simulation

To train their controller, they used a process known as reinforcement learning in which the model learns through trial-and-error with a reward. The model was tasked with selecting printing parameters that would create a certain object in a simulated environment. After being shown the expected output, the model was rewarded when the parameters it chose minimized the error between its print and the expected outcome.
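
The source doesn't name the specific reinforcement-learning algorithm, so as a toy illustration of the trial-and-error idea, the sketch below hill-climbs one printing parameter against a stand-in simulator whose (unknown to the agent) ideal speed is 0.6. The real system trains a neural network policy on full simulated prints.

```python
import random

random.seed(1)

def simulated_error(speed):
    """Toy simulator: deposition error is smallest at an ideal speed of
    0.6, which the agent does not know in advance."""
    return abs(speed - 0.6)

# Trial-and-error with a reward: try a perturbed parameter and keep it
# whenever the reward (negative error) improves.
speed, best_reward = 0.1, -simulated_error(0.1)
for _ in range(200):
    candidate = min(1.0, max(0.0, speed + random.uniform(-0.05, 0.05)))
    r = -simulated_error(candidate)
    if r > best_reward:
        speed, best_reward = candidate, r
```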

An “error” in this case means that the model either dispensed too much material, filling in areas that should have been left open, or did not dispense enough, leaving open spots that should be filled in. As more simulated prints were performed, the model updated its control policy to maximize the reward, becoming more and more accurate.
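
A minimal version of that error signal, assuming the print and target are represented as occupancy grids (my representation, not necessarily the paper's): penalize material deposited where the target is open, plus open cells where the target should be filled.

```python
import numpy as np

def reward(printed, target):
    """Negative deposition error: over-filled cells (material where the
    target is open) plus under-filled cells (open where the target
    should be filled), both counted as penalties."""
    over = np.clip(printed - target, 0, None).sum()
    under = np.clip(target - printed, 0, None).sum()
    return -(over + under)

target = np.array([[1, 1], [0, 0]], dtype=float)
perfect = np.array([[1, 1], [0, 0]], dtype=float)
sloppy = np.array([[1, 0], [1, 0]], dtype=float)  # one under-fill, one over-fill
# A perfect print earns reward 0.0; each mistaken cell subtracts 1.0.
```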

The real world, on the other hand, is messier than a simulation. In practice, conditions usually change as a result of minor variations or noise in the printing process. So the researchers developed a numerical model that approximates 3D printer noise and used it to add noise to the simulation, making the simulated prints more realistic.
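
The paper's noise model is fit to real printer behavior; as a hedged sketch of the general idea, the snippet below perturbs each commanded deposition with multiplicative flow noise and a small additive drift, so simulated prints vary the way hardware does. The distribution and its parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_deposit(commanded, flow_std=0.05, drift=0.01):
    """Perturb a commanded deposition amount with multiplicative flow
    noise and a small additive drift, mimicking printer variability."""
    return commanded * rng.normal(1.0, flow_std) + drift

# Over many simulated deposits, the average stays near the command.
samples = [noisy_deposit(1.0) for _ in range(1000)]
mean = sum(samples) / len(samples)
```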

“The interesting thing we found was that, by implementing this noise model, we were able to transfer the control policy that was purely trained in simulation onto hardware without training with any physical experimentation,” Foshey says. “We didn’t need to do any fine-tuning on the actual equipment afterwards.”

When they tested the controller, they discovered that it printed objects more accurately than any other control method they tried. It excelled at infill printing, which involves printing the interior of an object. Other controllers deposited so much material that the printed object bulged, but the researchers’ controller adjusted the printing path to keep the object level.

Their control policy can even learn how materials spread after they are deposited and adjust its parameters accordingly.

“We were also able to create control policies that could control for various materials on the fly. So, if you had a manufacturing process in the field and wanted to change the material, you wouldn’t have to revalidate it. You could just load the new material and the controller would automatically adjust,” Foshey says.

The researchers want to develop controllers for other manufacturing processes now that they have demonstrated the effectiveness of this technique for 3D printing. They’d also like to see how the approach can be modified for scenarios involving multiple layers of material or the printing of multiple materials at the same time. Furthermore, they assumed that each material has a fixed viscosity (“syrupiness”), but a future iteration could use AI to recognize and adjust for viscosity in real-time.