A novel method for monocular pose estimation of uncooperative spacecraft using keypoints specialized for a given target is presented. A set of robust keypoints is created by evaluating the effectiveness of existing keypoint localization algorithms on simulated views of the target from different perspectives. Feature extraction and matching are used to build a model of the spacecraft before the flight mission, employing the same feature extraction algorithms that can be used during the mission. In addition, a visibility map is determined for each keypoint to aid outlier filtering, matching, and measurement covariance estimation. For initialization and matching, a Convolutional Neural Network (CNN) is trained to generate descriptors for the pre-computed keypoints that are robust to illumination, scale, and affine changes. The second part of the paper focuses on pose determination and filtering after keypoint-to-model matching. While several approaches for pose acquisition have been formulated, we propose a novel tracking method that uses a nonlinear filter based on the relative translational and rotational spacecraft dynamics, which estimates the covariance of the vision-based observations from the keypoint preprocessing information. Furthermore, the propagated covariance estimated for each extracted feature is used to aid feature matching.
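
As a rough illustration of how a propagated per-keypoint covariance could aid matching, the following Python sketch gates candidate detections with a Mahalanobis test before comparing descriptors. The function name, thresholds, and data layout are assumptions made for illustration only and do not represent the actual implementation described in the paper.

```python
# Illustrative sketch (not the authors' implementation): covariance-aided
# keypoint matching. Each pre-computed model keypoint is assumed to come with
# a predicted image location and a propagated 2x2 covariance from the filter;
# detected features are accepted only if they fall inside the Mahalanobis
# gate and their descriptors are sufficiently close.
import numpy as np

CHI2_GATE_2D = 5.99   # ~95% gate for 2 degrees of freedom (assumed threshold)
DESC_THRESHOLD = 0.8  # hypothetical descriptor-distance threshold


def match_keypoints(predicted_uv, predicted_cov, detected_uv, model_desc, detected_desc):
    """Greedy covariance-gated matching of detected features to model keypoints.

    predicted_uv  : (N, 2) predicted image coordinates of model keypoints
    predicted_cov : (N, 2, 2) propagated covariance of each prediction
    detected_uv   : (M, 2) detected feature coordinates
    model_desc    : (N, D) CNN descriptors of the model keypoints
    detected_desc : (M, D) CNN descriptors of the detected features
    """
    matches = []
    for i, (uv, cov) in enumerate(zip(predicted_uv, predicted_cov)):
        cov_inv = np.linalg.inv(cov)
        # Squared Mahalanobis distance of every detection to this prediction.
        diff = detected_uv - uv
        d2 = np.einsum('mi,ij,mj->m', diff, cov_inv, diff)
        candidates = np.flatnonzero(d2 < CHI2_GATE_2D)
        if candidates.size == 0:
            continue
        # Among the gated candidates, pick the closest descriptor.
        desc_dist = np.linalg.norm(detected_desc[candidates] - model_desc[i], axis=1)
        best = candidates[np.argmin(desc_dist)]
        if desc_dist.min() < DESC_THRESHOLD:
            matches.append((i, best))
    return matches
```

In such a scheme, the filter's propagated uncertainty restricts the search region for each keypoint, so descriptor comparison is only performed on geometrically plausible candidates; keypoints with poor visibility would carry larger covariances and correspondingly looser gates.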