Machine learning for multiscale thermal simulation of powder-bed additive manufacturing


06 Oct, 2021


Fabricating high-quality metal parts through additive manufacturing requires careful optimization of the process conditions. Simulation is a viable approach for finding them, and it additionally provides detailed process understanding.

Problem Description

Fabricating high-quality metal parts through additive manufacturing requires careful optimization of the process conditions. Simulation is a viable approach for finding them, and it additionally provides detailed process understanding. However, the computational cost is still too high and poses a barrier to this approach. Ghanbari et al. recently proposed a multiscale thermal simulation strategy: it merges the outcomes of computationally cheap millimeter-scale Finite-Element (FE) simulations into a relatively large simulation domain, enabling analysis at a reduced cost. This thesis evaluated the effectiveness of data-driven methods as an alternative to the small-scale FE simulations, to further reduce the computational cost of thermal analysis of large parts.

The goal of this thesis was to improve on the surrogate model from the previous thesis. The surrogate model is meant to dramatically reduce the computational cost of calculating the thermal history of the powder under a laser pass in a millimeter-scale domain. Because of the manufacturing process, the domain is not entirely filled with dense material; part of it is filled with powder. Powder has a much lower thermal conductivity than solid material, so the surrogate model must generalize over different powder distributions within the domain. The previous work had shown the promise of combining dimensionality reduction with a polynomial chaos expansion (PCE), which models the output as a distribution and selects the most likely result. PCE is a supervised method: it treats the model as a black box and maps selected inputs to the desired outputs, using preexisting (simulation) data to fit the PCE parameters by optimization. Due to the complex nature of the problem, the dimension must not only be reduced but also reconstructed after the PCE. Linear Principal Component Analysis (PCA) allows the data to be reconstructed using the outputs of the training data. However, the black-box approach of PCE always requires some training data from simulations or experiments. This implies that once the simulation parameters of interest change, a new training data set must be created, which means added computational cost.
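The PCA-plus-surrogate pipeline described above can be sketched as follows. This is a minimal illustration on synthetic data, not the thesis code: the "FE fields" are fabricated, and a plain degree-2 polynomial least-squares fit stands in for the PCE.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the FE training set (names and shapes are
# illustrative): 200 temperature fields with 500 nodal values each,
# driven by a 2-D process-parameter input x.
X = rng.uniform(-1.0, 1.0, size=(200, 2))
mode1 = np.linspace(0.0, 1.0, 500)           # two synthetic spatial modes
mode2 = mode1 ** 2
a = X[:, 0] + 0.5 * X[:, 0] * X[:, 1]        # mode amplitudes per sample
b = X[:, 1] ** 2
T = np.outer(a, mode1) + np.outer(b, mode2)  # "simulated" fields

# 1) Linear PCA: reduce each 500-dim field to a few principal coefficients.
T_mean = T.mean(axis=0)
_, _, Vt = np.linalg.svd(T - T_mean, full_matrices=False)
k = 3
coeffs = (T - T_mean) @ Vt[:k].T             # reduced representation

# 2) Surrogate in the reduced space: a degree-2 polynomial least-squares
#    fit stands in here for the PCE (same black-box, data-driven idea).
def features(x):
    return np.array([1.0, x[0], x[1], x[0] ** 2, x[0] * x[1], x[1] ** 2])

Phi = np.array([features(x) for x in X])
W, *_ = np.linalg.lstsq(Phi, coeffs, rcond=None)

# 3) Predict a full field for an unseen input: polynomial -> PCA basis.
x_new = np.array([0.2, -0.5])
T_pred = T_mean + features(x_new) @ W @ Vt[:k]
T_true = (x_new[0] + 0.5 * x_new[0] * x_new[1]) * mode1 + x_new[1] ** 2 * mode2
err = np.max(np.abs(T_pred - T_true))
```

The reconstruction step in (3) is exactly where the dependence on training data shows: the basis `Vt` and the fit `W` both come from precomputed simulations, so changing the simulation parameters of interest means regenerating them.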

Solution Description

This thesis asked whether a machine learning algorithm can simulate the temperature evolution of a high-power laser moving over metal without any training data. The relatively new approach called "Physics-Informed Neural Networks" (PINN) uses the automatic differentiation of Artificial Neural Networks (ANNs) to solve the 3D heat equation, the same PDE solved by the FE solver. The PDE residual is used as the loss: the network approximates the solution by minimizing this residual. The advantage is that no training data is needed at all, only an implementation of the PDE. The PINN samples inputs to the PDE and trains itself (Mishra and Molinaro 2020).
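In code, the residual-as-loss idea looks roughly like the sketch below. This is an assumed minimal setup in PyTorch, not the thesis implementation: a small MLP approximates T(x, y, z, t), and the residual of the heat equation T_t − α·(T_xx + T_yy + T_zz) = q, computed via automatic differentiation, is squared and minimized; the laser source term and diffusivity value are placeholders.

```python
import torch

torch.manual_seed(0)

# Small MLP approximating the temperature field T(x, y, z, t).
net = torch.nn.Sequential(
    torch.nn.Linear(4, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
alpha = 1.0e-2  # assumed thermal diffusivity (placeholder value)

def heat_residual(pts):
    # pts: (N, 4) collocation points in (x, y, z, t)
    pts = pts.clone().requires_grad_(True)
    T = net(pts)
    grads = torch.autograd.grad(T.sum(), pts, create_graph=True)[0]
    T_t = grads[:, 3:4]
    lap = 0.0
    for i in range(3):  # second spatial derivatives T_xx, T_yy, T_zz
        g2 = torch.autograd.grad(grads[:, i].sum(), pts, create_graph=True)[0]
        lap = lap + g2[:, i:i + 1]
    q = torch.zeros_like(T)  # laser source term omitted in this sketch
    return T_t - alpha * lap - q

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
pts = torch.rand(256, 4)   # sampled collocation points -- no training data
for _ in range(5):         # a few illustrative optimisation steps
    opt.zero_grad()
    loss = heat_residual(pts).pow(2).mean()
    loss.backward()
    opt.step()
```

Note that the only "data" are the sampled collocation points; the loss is built entirely from the PDE itself.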

The general infrastructure for the problem's PDE was implemented. However, given the novelty of the method, a simplified problem statement was needed to understand the algorithm better, so all temperature dependency was removed.

Using a simplified problem proved good practice, as it unveiled the importance of a proper sampling strategy. Sampling uniformly over the whole domain requires a very large number of points for dense coverage. The laser source has a very small spread compared to the domain (only 0.00454% of it), and within the laser spot the power density follows a Gaussian distribution, so the region of highest power is smaller still. Yet enough evaluations at this location are necessary for the PINN to learn the behavior of the laser source. Once this was discovered, an adaptive sampling strategy was implemented: it concentrates points around the laser center and moves with the laser.
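An adaptive sampler of this kind can be sketched as below. All parameter names and values (domain size, scan speed, beam width, focus fraction) are illustrative assumptions, not the thesis values: a fraction of the collocation points is drawn uniformly over the domain, and the rest form a Gaussian cloud around the current laser center, which moves with the scan.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_collocation(n, t, domain=(1e-3, 1e-3, 1e-4),
                       laser_start=(1e-4, 5e-4), scan_speed=1.0,
                       beam_sigma=2e-5, focus_frac=0.7):
    """Mix uniform background points with points focused on the laser spot."""
    n_focus = int(focus_frac * n)
    n_unif = n - n_focus
    # uniform background points over the (x, y, z) box
    unif = rng.uniform(0.0, 1.0, size=(n_unif, 3)) * np.array(domain)
    # laser centre at time t, moving along x with the scan speed
    cx = laser_start[0] + scan_speed * t
    cy = laser_start[1]
    focus = np.empty((n_focus, 3))
    focus[:, 0] = rng.normal(cx, beam_sigma, n_focus)
    focus[:, 1] = rng.normal(cy, beam_sigma, n_focus)
    focus[:, 2] = rng.uniform(0.0, domain[2], n_focus)  # depth stays uniform
    pts = np.vstack([unif, focus])
    # clip back into the box so no point leaves the simulation domain
    return np.clip(pts, 0.0, np.array(domain))

pts = sample_collocation(1000, t=2e-4)
```

Because the focused points follow the laser center over time, each training batch keeps a dense cover of the tiny high-power region without wasting points elsewhere.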
With this strategy, the PINN was able to predict with a maximum error of 3% without any training data. When powder pockets are introduced, the location of the powder must become a network input, so it is important to gain experience with parametrized models; these also give further insight into what happens with variable conductivity. The result was satisfying: all evaluated conductivities met the acceptance criterion of at most 5% relative error, with an average error of around 3%. This demonstrated the promise of cheap sensitivity analysis, since a parametric model can smoothly change the parameter value and repeat the analysis on the entire domain without any additional training.
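The parametrization itself is a small change to the network interface. In the assumed sketch below (again illustrative, not the thesis code), the conductivity parameter is simply appended as a fifth input, so a single trained model can be evaluated across a sweep of conductivities with nothing but cheap forward passes.

```python
import torch

torch.manual_seed(0)

# Parametrised network: inputs are (x, y, z, t, k), where k is the
# conductivity parameter to be varied in the sensitivity analysis.
net = torch.nn.Sequential(
    torch.nn.Linear(5, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)

def predict_field(xyzt, k):
    # broadcast the scalar parameter k over all collocation points
    k_col = torch.full((xyzt.shape[0], 1), k)
    return net(torch.cat([xyzt, k_col], dim=1))

xyzt = torch.rand(128, 4)
# a conductivity sweep needs no retraining, only forward evaluations
sweep = {k: predict_field(xyzt, k) for k in (0.5, 1.0, 2.0)}
```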


To see the code and the full thesis, please go to https://gitlab.ethz.ch/olemuell/laser-thermal-pinn
This thesis was written at EMPA and ETHZ.

More about Ole Müller

Wevolver 2022