In this paper we present an approach that combines the finite element method and a deep neural network to learn complex elastic deformations, with the objective of providing augmented reality during hepatic surgery. Derived from the U-Net architecture, our network is trained entirely on physics-based simulations of a preoperative segmentation of the organ. These simulations are performed using an immersed-boundary method, which offers several numerical and practical benefits, such as not requiring boundary-conforming volume elements. We perform a quantitative assessment of the method using synthetic and ex vivo patient data. Results show that the network is capable of solving for the deformed state of the organ using only sparse, partial surface displacement data, and achieves accuracy similar to that of a FEM solution while being about 100 $$\times $$ faster. When applied to an ex vivo liver example, we achieve registration in only 3 ms with a mean target registration error (TRE) of 2.9 mm.