Siamese trackers, built on an efficient cross-correlation layer and powerful convolutional backbones, continue to attract interest in visual object tracking. However, previous Siamese trackers can suffer from prediction inconsistency that degrades tracking performance: most prevalent Siamese networks employ two parallel branches for different subtasks, and the corresponding outputs may be mutually misaligned. To address this issue, we propose a two-stage Siamese tracker named SiamPA for accurate object tracking. It employs center-based anchor-free heads in the first stage to produce preliminary predictions and a carefully designed Prediction Alignment and Refinement (PAR) module in the second stage to refine the first-stage output. The PAR module aligns and refines the multi-branch predictions and operates in a mini-Siamese manner. It is equipped with two distinct prediction branches: one aligns the multiple predictions produced in the first stage, and the other adjusts the coordinates of the proposals. Extensive experiments demonstrate the effectiveness of SiamPA, which achieves favorable performance on several prevalent benchmark datasets. In particular, SiamPA attains this performance while running at 67 FPS, far beyond real-time speed.
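The abstract does not give architectural details, so the following is only a minimal PyTorch sketch of the described two-stage idea: a center-based anchor-free first stage followed by a PAR-style second stage with one alignment branch and one coordinate-refinement branch. The class names, channel sizes, layer choices, and the assumption of a shared 256-channel correlation feature map are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class FirstStageHead(nn.Module):
    """Hypothetical center-based anchor-free head: one branch scores each
    location, the other regresses box offsets (l, t, r, b) at that location."""

    def __init__(self, channels: int = 256):
        super().__init__()
        self.cls_branch = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 1, 1),          # per-location foreground score
        )
        self.reg_branch = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 4, 1),          # per-location box offsets
        )

    def forward(self, correlation_feat: torch.Tensor):
        return self.cls_branch(correlation_feat), self.reg_branch(correlation_feat)


class PARModule(nn.Module):
    """Hypothetical second stage: one branch re-scores proposals so that
    classification agrees with localization (alignment), the other predicts
    residual coordinate corrections (refinement)."""

    def __init__(self, channels: int = 256):
        super().__init__()
        self.align_branch = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 1, 1),          # aligned confidence score
        )
        self.refine_branch = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 4, 1),          # residual box correction
        )

    def forward(self, proposal_feat: torch.Tensor, coarse_boxes: torch.Tensor):
        aligned_score = self.align_branch(proposal_feat)
        refined_boxes = coarse_boxes + self.refine_branch(proposal_feat)
        return aligned_score, refined_boxes


if __name__ == "__main__":
    feat = torch.randn(1, 256, 25, 25)          # stand-in for the correlation map
    stage1 = FirstStageHead()
    stage2 = PARModule()
    score, boxes = stage1(feat)
    aligned_score, refined_boxes = stage2(feat, boxes)
    print(aligned_score.shape, refined_boxes.shape)  # (1,1,25,25) (1,4,25,25)
```

In this sketch the second stage consumes the same feature map and the first-stage boxes, which is one plausible reading of "refining the first-stage output"; the paper's actual proposal-feature extraction and mini-Siamese design may differ.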