Learning from human demonstrations is fundamental to harnessing human intelligence for many tasks. A prominent approach to learning from demonstrations is inverse reinforcement learning (IRL), which learns a reward function from limited human demonstrations and then trains a control policy based on the learned reward. Existing IRL methods perform well in simple environments but often fail in complex, high-dimensional ones. To overcome these limitations, this paper studies the implementation of a generative adversarial imitation learning (GAIL) method that controls a quadcopter Unmanned Aerial Vehicle (UAV) navigating between two defined positions in a virtual environment built in Unreal Engine, whose simulations reflect real-world physics. We present procedures to build a customized virtual environment using Epic Games' Unreal Engine, collect expert demonstrations, and optimize the control policy with GAIL. Finally, we discuss the simulation results and analyze the performance of GAIL in three-dimensional UAV navigation.
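To make the GAIL idea concrete, the following is a minimal toy sketch, not the paper's implementation: a logistic-regression discriminator is trained to distinguish synthetic "expert" state-action pairs from "policy" samples (random stand-ins for real UAV trajectories), and its output is converted into the surrogate reward `-log(1 - D(s, a))` that GAIL feeds to the policy optimizer. All data and hyperparameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical toy data: "expert" state-action pairs cluster around +1,
# current "policy" samples around -1 (stand-ins for real UAV trajectories).
expert = rng.normal(loc=1.0, scale=0.5, size=(200, 4))
policy = rng.normal(loc=-1.0, scale=0.5, size=(200, 4))

# Discriminator: logistic regression D(s, a) = sigmoid(w . x + b).
w = np.zeros(4)
b = 0.0
lr = 0.1

for _ in range(500):
    # GAIL discriminator objective: maximize log D(expert) + log(1 - D(policy)).
    d_exp = sigmoid(expert @ w + b)
    d_pol = sigmoid(policy @ w + b)
    # Gradient ascent on that objective.
    grad_w = expert.T @ (1 - d_exp) / len(expert) - policy.T @ d_pol / len(policy)
    grad_b = np.mean(1 - d_exp) - np.mean(d_pol)
    w += lr * grad_w
    b += lr * grad_b

# Surrogate reward passed to the policy-gradient step: r(s, a) = -log(1 - D(s, a)).
def surrogate_reward(x):
    return -np.log(1.0 - sigmoid(x @ w + b) + 1e-8)

# Expert-like behavior should now receive a higher surrogate reward,
# so maximizing it pushes the policy toward the expert distribution.
print(surrogate_reward(expert).mean() > surrogate_reward(policy).mean())  # True
```

In the full method, the policy update uses a reinforcement-learning algorithm (e.g., a policy-gradient step) on this surrogate reward, and the discriminator and policy are updated alternately.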