With the rapid growth of self-driving vehicles, automobiles demand diverse data from multiple sensors to perceive the surrounding environment, and calibrating those sensors against one another is a necessary preprocessing step for using their data effectively. In particular, the LiDAR-camera pair, whose 2D and 3D measurements complement each other, has been widely used in autonomous vehicles. Most traditional calibration methods require specific calibration targets and complicated environmental setups, which demand expensive manual work. In this study, we propose a deep neural network, CalibBD, that finds the six-degrees-of-freedom (6-DoF) transformation between the LiDAR and the camera without any specific target or offline setup. Unlike previous CNN-based deep learning methods, which process raw 3D point clouds and 2D images frame by frame, CalibBD applies a Bi-LSTM to sequence data to extract temporal features between consecutive frames. It not only predicts the calibration parameters by minimizing both a transformation loss and a depth loss, but also uses a temporal loss to refine the predicted parameters. The proposed model maintains stable performance under various magnitudes of mis-calibration and achieves higher accuracy than the state-of-the-art CNN-based method on the KITTI dataset.
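As a rough illustration of the approach summarized above, the PyTorch sketch below shows a Bi-LSTM regressing per-frame 6-DoF corrections and a combined objective of transformation, depth, and temporal-consistency terms. This is a minimal sketch under our own assumptions: the module and parameter names (`CalibBDHead`, `calib_loss`, the loss weights) are hypothetical, and the paper's actual architecture and loss formulation may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class CalibBDHead(nn.Module):
    """Hypothetical sketch: a Bi-LSTM over per-frame fused LiDAR-camera
    features, regressing a 6-DoF correction (3 rotation + 3 translation)."""

    def __init__(self, feat_dim: int = 512, hidden: int = 256):
        super().__init__()
        self.bilstm = nn.LSTM(feat_dim, hidden,
                              batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, 6)  # [rx, ry, rz, tx, ty, tz]

    def forward(self, seq_feats: torch.Tensor) -> torch.Tensor:
        # seq_feats: (B, T, feat_dim) -> per-frame predictions (B, T, 6)
        out, _ = self.bilstm(seq_feats)
        return self.fc(out)


def calib_loss(pred, gt, pred_depth, gt_depth,
               w_depth=1.0, w_temporal=0.1):
    """Combined objective: transformation + depth + temporal consistency.
    The weights are placeholders, not values from the paper."""
    l_transform = F.smooth_l1_loss(pred, gt)
    l_depth = F.l1_loss(pred_depth, gt_depth)
    # Temporal term: predictions for consecutive frames of one sequence
    # should agree, since the true extrinsics are fixed over the sequence.
    l_temporal = (pred[:, 1:] - pred[:, :-1]).abs().mean()
    return l_transform + w_depth * l_depth + w_temporal * l_temporal


# Usage with random stand-in data: 4 sequences of 5 frames each.
model = CalibBDHead()
feats = torch.randn(4, 5, 512)
pred = model(feats)
loss = calib_loss(pred, torch.zeros_like(pred),
                  torch.randn(4, 5, 64, 64), torch.randn(4, 5, 64, 64))
loss.backward()
```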