Many applications in computer vision require calibrated cameras, but identifying the calibration parameters of a camera is a tedious task. Common methods require custom-built calibration patterns that must be photographed from many different perspectives. This research introduces a novel auto-calibration method that reduces this effort to a minimum. The method uses a neural network framework and learns the parameters through backpropagation and gradient descent. Three views of the same arbitrarily textured flat surface serve as input. Two of the views are transformed to match the third, reference view by plane homographies. Feature maps are extracted from the views and used to compare them. Intrinsic, extrinsic, and distortion parameters can then be learned by maximizing the similarity between the transformed views and the reference view. The results show that the method recovers the calibration parameters of artificially distorted images. Results on real camera images are comparable to those of common methods that require planar calibration patterns, which makes the proposed method a quick alternative.
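The core idea described above — parameterizing a plane homography by the camera parameters and recovering them by gradient descent on a view-consistency loss — can be sketched in plain Python. Everything concrete here is an illustrative assumption, not the paper's setup: only the focal length is treated as unknown, the camera motion, plane, and point grid are fixed made-up values, finite-difference gradients stand in for backpropagation, and squared point mismatch stands in for the feature-map similarity.

```python
import math

def make_K(f, cx=320.0, cy=240.0):
    """Pinhole intrinsics with a single unknown focal length (assumed model)."""
    return [[f, 0.0, cx], [0.0, f, cy], [0.0, 0.0, 1.0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def inv3(M):
    """Inverse of a 3x3 matrix via the adjugate."""
    (a, b, c), (d, e, f), (g, h, i) = M
    det = a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)
    adj = [[e*i - f*h, c*h - b*i, b*f - c*e],
           [f*g - d*i, a*i - c*g, c*d - a*f],
           [d*h - e*g, b*g - a*h, a*e - b*d]]
    return [[x / det for x in row] for row in adj]

def plane_homography(f):
    """H = K (R - t n^T / d) K^-1 for a fixed, assumed motion relative to
    a plane; only the focal length f is treated as unknown."""
    th = math.radians(5.0)                  # assumed rotation about the y axis
    R = [[math.cos(th), 0.0, math.sin(th)],
         [0.0, 1.0, 0.0],
         [-math.sin(th), 0.0, math.cos(th)]]
    t, n, d = [0.1, 0.0, 0.0], [0.0, 0.0, 1.0], 1.0  # assumed motion and plane
    M = [[R[i][j] - t[i] * n[j] / d for j in range(3)] for i in range(3)]
    K = make_K(f)
    return matmul(matmul(K, M), inv3(K))

def warp(H, x, y):
    """Apply the homography to a pixel (projective division included)."""
    w = H[2][0]*x + H[2][1]*y + H[2][2]
    return ((H[0][0]*x + H[0][1]*y + H[0][2]) / w,
            (H[1][0]*x + H[1][1]*y + H[1][2]) / w)

F_TRUE = 500.0                              # ground-truth focal length (assumed)
pts = [(x, y) for x in (100.0, 320.0, 540.0) for y in (100.0, 240.0, 380.0)]
targets = [warp(plane_homography(F_TRUE), x, y) for x, y in pts]

def loss(f):
    """Squared mismatch between the warped view and the reference view
    (a stand-in for the feature-map similarity used in the paper)."""
    H = plane_homography(f)
    return sum((wx - qx)**2 + (wy - qy)**2
               for (x, y), (qx, qy) in zip(pts, targets)
               for (wx, wy) in [warp(H, x, y)])

# Gradient descent with finite-difference gradients and backtracking line
# search, starting from a deliberately wrong focal-length guess.
f, h = 450.0, 1e-3
for _ in range(300):
    g = (loss(f + h) - loss(f - h)) / (2 * h)
    if abs(g) < 1e-9:
        break
    step = 8.0
    while loss(f - step * g) > loss(f) - 1e-4 * step * g * g and step > 1e-12:
        step *= 0.5
    f -= step * g

print(f)   # recovered focal length, close to F_TRUE
```

Because the synthetic correspondences are generated by the same homography model, the loss is zero exactly at the true focal length, so the descent recovers it; with real images, the paper's method instead compares learned feature maps of the warped and reference views.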